00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1993 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3259 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.056 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.084 Using shallow fetch with depth 1 00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.084 > git --version # timeout=10 00:00:00.116 > git --version # 'git version 2.39.2' 00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.381 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.393 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.406 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:05.406 > git config core.sparsecheckout # timeout=10 00:00:05.416 > git read-tree -mu HEAD # timeout=10 00:00:05.432 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:05.451 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:05.452 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:05.580 [Pipeline] Start of Pipeline 00:00:05.595 [Pipeline] library 00:00:05.597 Loading library shm_lib@master 00:00:05.597 Library shm_lib@master is cached. Copying from home. 00:00:05.614 [Pipeline] node 00:00:05.634 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.635 [Pipeline] { 00:00:05.645 [Pipeline] catchError 00:00:05.647 [Pipeline] { 00:00:05.659 [Pipeline] wrap 00:00:05.666 [Pipeline] { 00:00:05.672 [Pipeline] stage 00:00:05.673 [Pipeline] { (Prologue) 00:00:05.839 [Pipeline] sh 00:00:06.692 + logger -p user.info -t JENKINS-CI 00:00:06.711 [Pipeline] echo 00:00:06.712 Node: GP11 00:00:06.717 [Pipeline] sh 00:00:07.046 [Pipeline] setCustomBuildProperty 00:00:07.067 [Pipeline] echo 00:00:07.068 Cleanup processes 00:00:07.074 [Pipeline] sh 00:00:07.370 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.370 9153 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.387 [Pipeline] sh 00:00:07.683 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.683 ++ grep -v 'sudo pgrep' 00:00:07.683 ++ awk '{print $1}' 00:00:07.683 + sudo kill -9 00:00:07.683 + true 00:00:07.702 [Pipeline] cleanWs 00:00:07.713 [WS-CLEANUP] Deleting project workspace... 00:00:07.713 [WS-CLEANUP] Deferred wipeout is used... 
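The prologue's cleanup amounts to: pgrep lists any process still referencing the workspace spdk tree, the pgrep invocation itself is filtered out, awk keeps only the PIDs, and kill -9 removes the survivors, with a trailing true so the stage stays green when nothing is found (the '+ true' step in the trace above). A minimal standalone sketch of that pattern, assuming the same workspace path; kill_stale_spdk is an illustrative name, not a helper from the SPDK scripts:

#!/usr/bin/env bash
# Sweep stale SPDK processes out of the Jenkins workspace before a run.
# Mirrors the pgrep / grep -v / awk / kill -9 sequence in the trace above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

kill_stale_spdk() {
    local pids
    # pgrep -a prints the full command line, -f matches the pattern against it;
    # grep -v drops the pgrep invocation itself, awk keeps the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With an empty PID list, kill -9 exits nonzero; '|| true' swallows that,
    # matching the '+ true' step in the log.
    sudo kill -9 $pids || true
}

kill_stale_spdk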
00:00:07.727 [WS-CLEANUP] done 00:00:07.732 [Pipeline] setCustomBuildProperty 00:00:07.747 [Pipeline] sh 00:00:08.042 + sudo git config --global --replace-all safe.directory '*' 00:00:08.137 [Pipeline] httpRequest 00:00:09.903 [Pipeline] echo 00:00:09.904 Sorcerer 10.211.164.101 is alive 00:00:09.913 [Pipeline] httpRequest 00:00:09.918 HttpMethod: GET 00:00:09.919 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:09.921 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:09.943 Response Code: HTTP/1.1 200 OK 00:00:09.944 Success: Status code 200 is in the accepted range: 200,404 00:00:09.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:33.293 [Pipeline] sh 00:00:33.592 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:33.613 [Pipeline] httpRequest 00:00:33.651 [Pipeline] echo 00:00:33.653 Sorcerer 10.211.164.101 is alive 00:00:33.662 [Pipeline] httpRequest 00:00:33.670 HttpMethod: GET 00:00:33.671 URL: http://10.211.164.101/packages/spdk_e64f085ad59d93ad2dad78312b00a97bbd6394ab.tar.gz 00:00:33.675 Sending request to url: http://10.211.164.101/packages/spdk_e64f085ad59d93ad2dad78312b00a97bbd6394ab.tar.gz 00:00:33.687 Response Code: HTTP/1.1 200 OK 00:00:33.688 Success: Status code 200 is in the accepted range: 200,404 00:00:33.689 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e64f085ad59d93ad2dad78312b00a97bbd6394ab.tar.gz 00:01:25.843 [Pipeline] sh 00:01:26.132 + tar --no-same-owner -xf spdk_e64f085ad59d93ad2dad78312b00a97bbd6394ab.tar.gz 00:01:29.452 [Pipeline] sh 00:01:29.741 + git -C spdk log --oneline -n5 00:01:29.741 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:29.741 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:29.741 6c7c1f57e accel: add sequence outstanding stat 00:01:29.741 3bc8e6a26 accel: add utility to put task 00:01:29.741 2dba73997 accel: move get task utility 00:01:29.761 [Pipeline] withCredentials 00:01:29.773 > git --version # timeout=10 00:01:29.783 > git --version # 'git version 2.39.2' 00:01:29.810 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.812 [Pipeline] { 00:01:29.821 [Pipeline] retry 00:01:29.823 [Pipeline] { 00:01:29.840 [Pipeline] sh 00:01:30.371 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:31.777 [Pipeline] } 00:01:31.799 [Pipeline] // retry 00:01:31.805 [Pipeline] } 00:01:31.827 [Pipeline] // withCredentials 00:01:31.837 [Pipeline] httpRequest 00:01:31.857 [Pipeline] echo 00:01:31.858 Sorcerer 10.211.164.101 is alive 00:01:31.868 [Pipeline] httpRequest 00:01:31.874 HttpMethod: GET 00:01:31.875 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.876 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.880 Response Code: HTTP/1.1 200 OK 00:01:31.881 Success: Status code 200 is in the accepted range: 200,404 00:01:31.882 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.909 [Pipeline] sh 00:01:38.200 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:40.121 [Pipeline] sh 00:01:40.413 + git -C dpdk log --oneline -n5 00:01:40.413 caf0f5d395 version: 22.11.4 00:01:40.413 7d6f1cc05f Revert "net/iavf: fix 
abnormal disable HW interrupt" 00:01:40.413 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.413 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.413 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.425 [Pipeline] } 00:01:40.446 [Pipeline] // stage 00:01:40.455 [Pipeline] stage 00:01:40.457 [Pipeline] { (Prepare) 00:01:40.482 [Pipeline] writeFile 00:01:40.499 [Pipeline] sh 00:01:40.789 + logger -p user.info -t JENKINS-CI 00:01:40.804 [Pipeline] sh 00:01:41.095 + logger -p user.info -t JENKINS-CI 00:01:41.110 [Pipeline] sh 00:01:41.400 + cat autorun-spdk.conf 00:01:41.400 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.400 SPDK_TEST_NVMF=1 00:01:41.400 SPDK_TEST_NVME_CLI=1 00:01:41.400 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.400 SPDK_TEST_NVMF_NICS=e810 00:01:41.400 SPDK_TEST_VFIOUSER=1 00:01:41.400 SPDK_RUN_UBSAN=1 00:01:41.400 NET_TYPE=phy 00:01:41.400 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.400 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.410 RUN_NIGHTLY=1 00:01:41.414 [Pipeline] readFile 00:01:41.466 [Pipeline] withEnv 00:01:41.468 [Pipeline] { 00:01:41.482 [Pipeline] sh 00:01:41.773 + set -ex 00:01:41.774 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:41.774 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:41.774 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.774 ++ SPDK_TEST_NVMF=1 00:01:41.774 ++ SPDK_TEST_NVME_CLI=1 00:01:41.774 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.774 ++ SPDK_TEST_NVMF_NICS=e810 00:01:41.774 ++ SPDK_TEST_VFIOUSER=1 00:01:41.774 ++ SPDK_RUN_UBSAN=1 00:01:41.774 ++ NET_TYPE=phy 00:01:41.774 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.774 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.774 ++ RUN_NIGHTLY=1 00:01:41.774 + case $SPDK_TEST_NVMF_NICS in 00:01:41.774 + DRIVERS=ice 00:01:41.774 + [[ tcp == \r\d\m\a ]] 00:01:41.774 + [[ -n ice ]] 00:01:41.774 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:41.774 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:45.078 rmmod: ERROR: Module irdma is not currently loaded 00:01:45.078 rmmod: ERROR: Module i40iw is not currently loaded 00:01:45.078 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:45.078 + true 00:01:45.078 + for D in $DRIVERS 00:01:45.078 + sudo modprobe ice 00:01:45.078 + exit 0 00:01:45.090 [Pipeline] } 00:01:45.108 [Pipeline] // withEnv 00:01:45.113 [Pipeline] } 00:01:45.130 [Pipeline] // stage 00:01:45.140 [Pipeline] catchError 00:01:45.142 [Pipeline] { 00:01:45.154 [Pipeline] timeout 00:01:45.154 Timeout set to expire in 50 min 00:01:45.155 [Pipeline] { 00:01:45.166 [Pipeline] stage 00:01:45.168 [Pipeline] { (Tests) 00:01:45.178 [Pipeline] sh 00:01:45.465 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.465 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.465 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.465 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:45.465 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.465 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:45.465 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:45.465 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:45.465 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:45.465 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:45.465 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:45.465 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:45.465 + source /etc/os-release 00:01:45.465 ++ NAME='Fedora Linux' 00:01:45.465 ++ VERSION='38 (Cloud Edition)' 00:01:45.465 ++ ID=fedora 00:01:45.465 ++ VERSION_ID=38 00:01:45.465 ++ VERSION_CODENAME= 00:01:45.465 ++ PLATFORM_ID=platform:f38 00:01:45.465 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:45.465 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:45.465 ++ LOGO=fedora-logo-icon 00:01:45.465 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:45.465 ++ HOME_URL=https://fedoraproject.org/ 00:01:45.465 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:45.465 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:45.465 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:45.465 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:45.465 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:45.465 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:45.465 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:45.465 ++ SUPPORT_END=2024-05-14 00:01:45.465 ++ VARIANT='Cloud Edition' 00:01:45.465 ++ VARIANT_ID=cloud 00:01:45.465 + uname -a 00:01:45.465 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:45.465 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:46.402 Hugepages 00:01:46.402 node hugesize free / total 00:01:46.402 node0 1048576kB 0 / 0 00:01:46.402 node0 2048kB 0 / 0 00:01:46.402 node1 1048576kB 0 / 0 00:01:46.402 node1 2048kB 0 / 0 00:01:46.402 00:01:46.402 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.661 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:46.661 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:46.661 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:46.661 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:46.661 + rm -f /tmp/spdk-ld-path 00:01:46.661 + source autorun-spdk.conf 00:01:46.661 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.661 ++ SPDK_TEST_NVMF=1 00:01:46.661 ++ SPDK_TEST_NVME_CLI=1 00:01:46.661 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.661 ++ SPDK_TEST_NVMF_NICS=e810 00:01:46.661 ++ SPDK_TEST_VFIOUSER=1 00:01:46.661 ++ SPDK_RUN_UBSAN=1 00:01:46.661 ++ NET_TYPE=phy 00:01:46.661 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.661 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:46.661 ++ RUN_NIGHTLY=1 00:01:46.661 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.661 + [[ -n '' ]] 00:01:46.661 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.661 + for M in /var/spdk/build-*-manifest.txt 00:01:46.661 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.661 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:46.661 + for M in /var/spdk/build-*-manifest.txt 00:01:46.661 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.661 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:46.661 ++ uname 00:01:46.661 + [[ Linux == \L\i\n\u\x ]] 00:01:46.661 + sudo dmesg -T 00:01:46.661 + sudo dmesg --clear 00:01:46.661 + dmesg_pid=9904 00:01:46.661 + [[ Fedora Linux == FreeBSD ]] 00:01:46.661 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.661 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.661 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.661 + sudo dmesg -Tw 00:01:46.661 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.661 + export FIO_BIN=/usr/src/fio-static/fio 00:01:46.661 + FIO_BIN=/usr/src/fio-static/fio 00:01:46.661 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.661 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:46.661 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.661 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.661 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.661 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.661 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.661 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.661 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:46.661 Test configuration: 00:01:46.661 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.661 SPDK_TEST_NVMF=1 00:01:46.661 SPDK_TEST_NVME_CLI=1 00:01:46.661 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.661 SPDK_TEST_NVMF_NICS=e810 00:01:46.661 SPDK_TEST_VFIOUSER=1 00:01:46.661 SPDK_RUN_UBSAN=1 00:01:46.661 NET_TYPE=phy 00:01:46.661 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.661 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:46.661 RUN_NIGHTLY=1 10:48:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:46.661 10:48:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:46.661 10:48:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.662 10:48:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.662 10:48:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.662 10:48:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.662 10:48:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.662 10:48:01 -- paths/export.sh@5 -- $ export PATH 00:01:46.662 10:48:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.662 10:48:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:46.662 10:48:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:46.662 10:48:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720687681.XXXXXX 00:01:46.662 10:48:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720687681.sR1LyZ 00:01:46.662 10:48:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:46.662 10:48:01 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:01:46.662 10:48:01 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:46.662 10:48:01 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:46.662 10:48:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:46.662 10:48:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:46.662 10:48:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:46.662 10:48:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:46.662 10:48:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.662 10:48:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:46.662 10:48:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:46.662 10:48:01 -- pm/common@17 -- $ local monitor 00:01:46.662 10:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.662 10:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.662 10:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.662 10:48:01 -- pm/common@21 -- $ date +%s 00:01:46.662 10:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.662 10:48:01 -- pm/common@21 -- $ date +%s 00:01:46.662 10:48:01 -- pm/common@25 -- $ sleep 1 00:01:46.662 10:48:01 -- pm/common@21 -- $ date +%s 00:01:46.963 10:48:01 -- pm/common@21 -- $ date +%s 00:01:46.963 10:48:01 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720687681 00:01:46.963 10:48:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720687681 00:01:46.963 10:48:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720687681 00:01:46.963 10:48:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720687681 00:01:46.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720687681_collect-vmstat.pm.log 00:01:46.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720687681_collect-cpu-load.pm.log 00:01:46.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720687681_collect-cpu-temp.pm.log 00:01:46.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720687681_collect-bmc-pm.bmc.pm.log 00:01:47.899 10:48:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:47.899 10:48:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.899 10:48:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.899 10:48:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.899 10:48:02 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.899 Thu Jul 11 08:48:02 AM UTC 2024 00:01:47.899 10:48:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.899 v24.09-pre-201-ge64f085ad 00:01:47.899 10:48:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:47.899 10:48:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.899 10:48:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:47.899 10:48:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:47.899 10:48:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.899 10:48:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.899 ************************************ 00:01:47.899 START TEST ubsan 00:01:47.899 ************************************ 00:01:47.899 10:48:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:47.899 using ubsan 00:01:47.899 00:01:47.899 real 0m0.000s 00:01:47.899 user 0m0.000s 00:01:47.899 sys 0m0.000s 00:01:47.899 10:48:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:47.899 10:48:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.899 ************************************ 00:01:47.899 END TEST ubsan 00:01:47.899 ************************************ 00:01:47.899 10:48:02 -- common/autotest_common.sh@1142 -- $ return 0 00:01:47.899 10:48:02 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:47.899 10:48:02 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:47.899 10:48:02 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:47.899 10:48:02 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:47.899 10:48:02 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:47.899 10:48:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.899 ************************************ 00:01:47.899 START TEST build_native_dpdk 00:01:47.899 ************************************ 00:01:47.899 10:48:02 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:47.899 10:48:02 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:47.900 caf0f5d395 version: 22.11.4 00:01:47.900 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:47.900 dc9c799c7d vhost: fix missing spinlock unlock 00:01:47.900 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:47.900 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:47.900 
10:48:02 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:47.900 10:48:02 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:47.900 patching file config/rte_config.h 00:01:47.900 Hunk #1 succeeded at 60 (offset 1 line). 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:47.900 10:48:02 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:54.487 The Meson build system 00:01:54.487 Version: 1.3.1 00:01:54.487 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:54.487 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:54.487 Build type: native build 00:01:54.487 Program cat found: YES (/usr/bin/cat) 00:01:54.487 Project name: DPDK 00:01:54.487 Project version: 22.11.4 00:01:54.487 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.487 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:54.487 Host machine cpu family: x86_64 00:01:54.487 Host machine cpu: x86_64 00:01:54.487 Message: ## Building in Developer Mode ## 00:01:54.487 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.487 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:54.487 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.487 Program objdump found: YES (/usr/bin/objdump) 00:01:54.487 Program python3 found: YES (/usr/bin/python3) 00:01:54.487 Program cat found: YES (/usr/bin/cat) 00:01:54.487 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:54.487 Checking for size of "void *" : 8 00:01:54.487 Checking for size of "void *" : 8 (cached) 00:01:54.487 Library m found: YES 00:01:54.487 Library numa found: YES 00:01:54.487 Has header "numaif.h" : YES 00:01:54.487 Library fdt found: NO 00:01:54.487 Library execinfo found: NO 00:01:54.487 Has header "execinfo.h" : YES 00:01:54.487 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.487 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.487 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.487 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.487 Run-time dependency openssl found: YES 3.0.9 00:01:54.487 Run-time dependency libpcap found: YES 1.10.4 00:01:54.487 Has header "pcap.h" with dependency libpcap: YES 00:01:54.487 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.487 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.487 Compiler for C supports arguments -Wformat: YES 00:01:54.487 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.487 Compiler for C supports arguments -Wformat-security: NO 00:01:54.487 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.487 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.487 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.487 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.487 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.487 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.487 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.487 Compiler for C supports arguments -Wundef: YES 00:01:54.487 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.487 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.487 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.487 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.487 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.487 Compiler for C supports arguments -mavx512f: YES 00:01:54.487 Checking if "AVX512 checking" compiles: YES 00:01:54.487 Fetching value of define "__SSE4_2__" : 1 00:01:54.487 Fetching value of define "__AES__" : 1 00:01:54.487 Fetching value of define "__AVX__" : 1 00:01:54.487 Fetching value of define "__AVX2__" : (undefined) 00:01:54.487 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.487 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.487 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.487 Fetching value of define "__AVX512F__" : (undefined) 00:01:54.487 Fetching value of define "__AVX512VL__" : (undefined) 00:01:54.487 Fetching value of define "__PCLMUL__" : 1 00:01:54.487 Fetching value of define "__RDRND__" : 1 00:01:54.487 Fetching value of define "__RDSEED__" : (undefined) 00:01:54.487 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.487 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.487 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.487 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.487 Checking for function "getentropy" : YES 00:01:54.487 Message: lib/eal: Defining dependency "eal" 00:01:54.487 Message: lib/ring: Defining dependency "ring" 00:01:54.487 Message: lib/rcu: Defining dependency "rcu" 00:01:54.487 Message: lib/mempool: Defining dependency "mempool" 00:01:54.487 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:54.487 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.487 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.487 Compiler for C supports arguments -mpclmul: YES 00:01:54.487 Compiler for C supports arguments -maes: YES 00:01:54.487 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.487 Compiler for C supports arguments -mavx512bw: YES 00:01:54.487 Compiler for C supports arguments -mavx512dq: YES 00:01:54.487 Compiler for C supports arguments -mavx512vl: YES 00:01:54.487 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.487 Compiler for C supports arguments -mavx2: YES 00:01:54.487 Compiler for C supports arguments -mavx: YES 00:01:54.487 Message: lib/net: Defining dependency "net" 00:01:54.487 Message: lib/meter: Defining dependency "meter" 00:01:54.487 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.487 Message: lib/pci: Defining dependency "pci" 00:01:54.487 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.487 Message: lib/metrics: Defining dependency "metrics" 00:01:54.487 Message: lib/hash: Defining dependency "hash" 00:01:54.487 Message: lib/timer: Defining dependency "timer" 00:01:54.487 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:54.487 Compiler for C supports arguments -mavx2: YES (cached) 00:01:54.487 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:54.487 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:54.487 Message: lib/acl: Defining dependency "acl" 00:01:54.487 Message: lib/bbdev: Defining dependency "bbdev" 00:01:54.487 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:54.487 Run-time dependency libelf found: YES 0.190 00:01:54.487 Message: lib/bpf: Defining dependency "bpf" 00:01:54.487 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:54.487 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.487 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.487 Message: lib/distributor: Defining dependency "distributor" 00:01:54.487 Message: lib/efd: Defining dependency "efd" 00:01:54.487 Message: lib/eventdev: Defining dependency "eventdev" 00:01:54.487 Message: lib/gpudev: Defining dependency "gpudev" 00:01:54.487 Message: lib/gro: Defining dependency "gro" 00:01:54.487 Message: lib/gso: Defining dependency "gso" 00:01:54.487 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:54.487 Message: lib/jobstats: Defining dependency "jobstats" 00:01:54.487 Message: lib/latencystats: Defining dependency "latencystats" 00:01:54.487 Message: lib/lpm: Defining dependency "lpm" 00:01:54.487 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:54.487 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:54.487 Message: lib/member: Defining dependency "member" 00:01:54.487 Message: lib/pcapng: Defining dependency "pcapng" 00:01:54.487 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.487 Message: lib/power: Defining dependency "power" 00:01:54.487 Message: lib/rawdev: Defining dependency "rawdev" 00:01:54.487 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:54.487 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.487 Message: lib/rib: Defining dependency "rib" 00:01:54.487 Message: lib/reorder: Defining dependency "reorder" 00:01:54.487 Message: lib/sched: Defining dependency "sched" 00:01:54.487 Message: lib/security: Defining dependency "security" 00:01:54.487 Message: lib/stack: Defining dependency "stack" 00:01:54.487 Has header "linux/userfaultfd.h" : YES 00:01:54.487 Message: lib/vhost: Defining dependency "vhost" 00:01:54.487 Message: lib/ipsec: Defining dependency "ipsec" 00:01:54.487 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.487 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:54.487 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:54.487 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:54.487 Message: lib/fib: Defining dependency "fib" 00:01:54.487 Message: lib/port: Defining dependency "port" 00:01:54.487 Message: lib/pdump: Defining dependency "pdump" 00:01:54.487 Message: lib/table: Defining dependency "table" 00:01:54.487 Message: lib/pipeline: Defining dependency "pipeline" 00:01:54.487 Message: lib/graph: Defining dependency "graph" 00:01:54.487 Message: lib/node: Defining dependency "node" 00:01:54.487 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.487 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.487 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.487 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.487 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:54.488 Compiler for C supports arguments -Wno-unused-value: YES 00:01:55.056 Compiler for C supports arguments -Wno-format: YES 00:01:55.056 Compiler for C supports arguments -Wno-format-security: YES 00:01:55.056 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:55.056 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:55.056 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:55.056 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:55.056 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:55.056 Compiler for C supports arguments -mavx2: YES (cached) 00:01:55.056 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.056 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:55.056 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:55.056 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:55.056 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:55.056 Program doxygen found: YES (/usr/bin/doxygen) 00:01:55.056 Configuring doxy-api.conf using configuration 00:01:55.056 Program sphinx-build found: NO 00:01:55.056 Configuring rte_build_config.h using configuration 00:01:55.056 Message: 00:01:55.056 ================= 00:01:55.056 Applications Enabled 00:01:55.056 ================= 00:01:55.056 00:01:55.056 apps: 00:01:55.056 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:55.056 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:55.056 test-security-perf, 00:01:55.056 00:01:55.056 Message: 00:01:55.056 ================= 00:01:55.056 Libraries Enabled 00:01:55.056 ================= 00:01:55.056 00:01:55.056 libs: 00:01:55.056 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:55.056 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:55.056 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:55.056 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:55.056 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:55.056 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:55.056 table, pipeline, graph, node, 00:01:55.056 00:01:55.056 Message: 00:01:55.056 =============== 00:01:55.056 Drivers Enabled 00:01:55.056 =============== 00:01:55.056 00:01:55.056 common: 00:01:55.056 00:01:55.056 bus: 00:01:55.056 pci, vdev, 00:01:55.056 mempool: 00:01:55.056 ring, 00:01:55.056 dma: 00:01:55.056 00:01:55.056 net: 00:01:55.056 i40e, 00:01:55.056 raw: 00:01:55.056 00:01:55.056 crypto: 00:01:55.056 00:01:55.056 compress: 00:01:55.056 00:01:55.056 regex: 00:01:55.056 00:01:55.056 vdpa: 00:01:55.056 00:01:55.056 event: 00:01:55.056 00:01:55.056 baseband: 00:01:55.056 00:01:55.056 gpu: 00:01:55.056 00:01:55.056 00:01:55.056 Message: 00:01:55.056 ================= 00:01:55.056 Content Skipped 00:01:55.056 ================= 00:01:55.056 00:01:55.056 apps: 00:01:55.056 00:01:55.056 libs: 00:01:55.056 kni: explicitly disabled via build config (deprecated lib) 00:01:55.056 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:55.056 00:01:55.056 drivers: 00:01:55.056 common/cpt: not in enabled drivers build config 00:01:55.056 common/dpaax: not in enabled drivers build config 00:01:55.056 common/iavf: not in enabled drivers build config 00:01:55.056 common/idpf: not in enabled drivers build config 00:01:55.056 common/mvep: not in enabled drivers build config 00:01:55.056 common/octeontx: not in enabled drivers build config 00:01:55.056 bus/auxiliary: not in enabled drivers build config 00:01:55.056 bus/dpaa: not in enabled drivers build config 00:01:55.056 bus/fslmc: not in enabled drivers build config 00:01:55.056 bus/ifpga: not in enabled drivers build config 00:01:55.056 bus/vmbus: not in enabled drivers build config 00:01:55.056 common/cnxk: not in enabled drivers build config 00:01:55.056 common/mlx5: not in enabled drivers build config 00:01:55.056 common/qat: not in enabled drivers build config 00:01:55.056 common/sfc_efx: not in enabled drivers build config 00:01:55.056 mempool/bucket: not in enabled drivers build config 00:01:55.056 mempool/cnxk: not in enabled drivers build config 00:01:55.056 mempool/dpaa: not in enabled drivers build config 00:01:55.056 mempool/dpaa2: not in enabled drivers build config 00:01:55.056 mempool/octeontx: not in enabled drivers build config 00:01:55.056 mempool/stack: not in enabled drivers build config 00:01:55.056 dma/cnxk: not in enabled drivers build config 00:01:55.056 dma/dpaa: not in enabled drivers build config 00:01:55.056 dma/dpaa2: not in enabled drivers build config 00:01:55.056 dma/hisilicon: not in enabled drivers build config 00:01:55.056 dma/idxd: not in enabled drivers build config 00:01:55.056 dma/ioat: not in enabled drivers build config 00:01:55.056 dma/skeleton: not in enabled drivers build config 00:01:55.056 net/af_packet: not in enabled drivers build config 00:01:55.056 net/af_xdp: not in enabled drivers build config 00:01:55.056 net/ark: not in enabled drivers build config 00:01:55.056 net/atlantic: not in enabled drivers build config 00:01:55.056 net/avp: not in enabled drivers build config 00:01:55.056 net/axgbe: not in enabled drivers build config 00:01:55.056 net/bnx2x: not in enabled 
drivers build config 00:01:55.056 net/bnxt: not in enabled drivers build config 00:01:55.056 net/bonding: not in enabled drivers build config 00:01:55.056 net/cnxk: not in enabled drivers build config 00:01:55.056 net/cxgbe: not in enabled drivers build config 00:01:55.056 net/dpaa: not in enabled drivers build config 00:01:55.056 net/dpaa2: not in enabled drivers build config 00:01:55.056 net/e1000: not in enabled drivers build config 00:01:55.056 net/ena: not in enabled drivers build config 00:01:55.056 net/enetc: not in enabled drivers build config 00:01:55.056 net/enetfec: not in enabled drivers build config 00:01:55.056 net/enic: not in enabled drivers build config 00:01:55.056 net/failsafe: not in enabled drivers build config 00:01:55.056 net/fm10k: not in enabled drivers build config 00:01:55.056 net/gve: not in enabled drivers build config 00:01:55.056 net/hinic: not in enabled drivers build config 00:01:55.056 net/hns3: not in enabled drivers build config 00:01:55.056 net/iavf: not in enabled drivers build config 00:01:55.057 net/ice: not in enabled drivers build config 00:01:55.057 net/idpf: not in enabled drivers build config 00:01:55.057 net/igc: not in enabled drivers build config 00:01:55.057 net/ionic: not in enabled drivers build config 00:01:55.057 net/ipn3ke: not in enabled drivers build config 00:01:55.057 net/ixgbe: not in enabled drivers build config 00:01:55.057 net/kni: not in enabled drivers build config 00:01:55.057 net/liquidio: not in enabled drivers build config 00:01:55.057 net/mana: not in enabled drivers build config 00:01:55.057 net/memif: not in enabled drivers build config 00:01:55.057 net/mlx4: not in enabled drivers build config 00:01:55.057 net/mlx5: not in enabled drivers build config 00:01:55.057 net/mvneta: not in enabled drivers build config 00:01:55.057 net/mvpp2: not in enabled drivers build config 00:01:55.057 net/netvsc: not in enabled drivers build config 00:01:55.057 net/nfb: not in enabled drivers build config 00:01:55.057 net/nfp: not in enabled drivers build config 00:01:55.057 net/ngbe: not in enabled drivers build config 00:01:55.057 net/null: not in enabled drivers build config 00:01:55.057 net/octeontx: not in enabled drivers build config 00:01:55.057 net/octeon_ep: not in enabled drivers build config 00:01:55.057 net/pcap: not in enabled drivers build config 00:01:55.057 net/pfe: not in enabled drivers build config 00:01:55.057 net/qede: not in enabled drivers build config 00:01:55.057 net/ring: not in enabled drivers build config 00:01:55.057 net/sfc: not in enabled drivers build config 00:01:55.057 net/softnic: not in enabled drivers build config 00:01:55.057 net/tap: not in enabled drivers build config 00:01:55.057 net/thunderx: not in enabled drivers build config 00:01:55.057 net/txgbe: not in enabled drivers build config 00:01:55.057 net/vdev_netvsc: not in enabled drivers build config 00:01:55.057 net/vhost: not in enabled drivers build config 00:01:55.057 net/virtio: not in enabled drivers build config 00:01:55.057 net/vmxnet3: not in enabled drivers build config 00:01:55.057 raw/cnxk_bphy: not in enabled drivers build config 00:01:55.057 raw/cnxk_gpio: not in enabled drivers build config 00:01:55.057 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:55.057 raw/ifpga: not in enabled drivers build config 00:01:55.057 raw/ntb: not in enabled drivers build config 00:01:55.057 raw/skeleton: not in enabled drivers build config 00:01:55.057 crypto/armv8: not in enabled drivers build config 00:01:55.057 crypto/bcmfs: not in 
enabled drivers build config 00:01:55.057 crypto/caam_jr: not in enabled drivers build config 00:01:55.057 crypto/ccp: not in enabled drivers build config 00:01:55.057 crypto/cnxk: not in enabled drivers build config 00:01:55.057 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.057 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.057 crypto/ipsec_mb: not in enabled drivers build config 00:01:55.057 crypto/mlx5: not in enabled drivers build config 00:01:55.057 crypto/mvsam: not in enabled drivers build config 00:01:55.057 crypto/nitrox: not in enabled drivers build config 00:01:55.057 crypto/null: not in enabled drivers build config 00:01:55.057 crypto/octeontx: not in enabled drivers build config 00:01:55.057 crypto/openssl: not in enabled drivers build config 00:01:55.057 crypto/scheduler: not in enabled drivers build config 00:01:55.057 crypto/uadk: not in enabled drivers build config 00:01:55.057 crypto/virtio: not in enabled drivers build config 00:01:55.057 compress/isal: not in enabled drivers build config 00:01:55.057 compress/mlx5: not in enabled drivers build config 00:01:55.057 compress/octeontx: not in enabled drivers build config 00:01:55.057 compress/zlib: not in enabled drivers build config 00:01:55.057 regex/mlx5: not in enabled drivers build config 00:01:55.057 regex/cn9k: not in enabled drivers build config 00:01:55.057 vdpa/ifc: not in enabled drivers build config 00:01:55.057 vdpa/mlx5: not in enabled drivers build config 00:01:55.057 vdpa/sfc: not in enabled drivers build config 00:01:55.057 event/cnxk: not in enabled drivers build config 00:01:55.057 event/dlb2: not in enabled drivers build config 00:01:55.057 event/dpaa: not in enabled drivers build config 00:01:55.057 event/dpaa2: not in enabled drivers build config 00:01:55.057 event/dsw: not in enabled drivers build config 00:01:55.057 event/opdl: not in enabled drivers build config 00:01:55.057 event/skeleton: not in enabled drivers build config 00:01:55.057 event/sw: not in enabled drivers build config 00:01:55.057 event/octeontx: not in enabled drivers build config 00:01:55.057 baseband/acc: not in enabled drivers build config 00:01:55.057 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:55.057 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:55.057 baseband/la12xx: not in enabled drivers build config 00:01:55.057 baseband/null: not in enabled drivers build config 00:01:55.057 baseband/turbo_sw: not in enabled drivers build config 00:01:55.057 gpu/cuda: not in enabled drivers build config 00:01:55.057 00:01:55.057 00:01:55.057 Build targets in project: 316 00:01:55.057 00:01:55.057 DPDK 22.11.4 00:01:55.057 00:01:55.057 User defined options 00:01:55.057 libdir : lib 00:01:55.057 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:55.057 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:55.057 c_link_args : 00:01:55.057 enable_docs : false 00:01:55.057 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:55.057 enable_kmods : false 00:01:55.057 machine : native 00:01:55.057 tests : false 00:01:55.057 00:01:55.057 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.057 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
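At this point DPDK 22.11.4 is configured; together with the compile step that follows, the external-DPDK build this stage performs boils down to the sequence below. A sketch under this job's paths, spelled as 'meson setup' to avoid the deprecation warning just printed; the install into dpdk/build (which SPDK later consumes via --with-dpdk) is assumed and not shown in this excerpt:

# Configure and build DPDK out of tree for SPDK to link against.
DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
cd "$DPDK_DIR"
meson setup build-tmp --prefix="$DPDK_DIR/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
# Build with the same parallelism as the ninja invocation that follows.
ninja -C build-tmp -j48
# ninja -C build-tmp install   # would populate $DPDK_DIR/build (not shown here)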
00:01:55.057 10:48:09 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:55.057 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:55.057 [1/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:55.057 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:55.057 [3/745] Generating lib/rte_telemetry_def with a custom command 00:01:55.057 [4/745] Generating lib/rte_kvargs_def with a custom command 00:01:55.319 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.319 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.319 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.319 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.319 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.319 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.319 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.319 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.319 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.319 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.319 [15/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.319 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.319 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.319 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.319 [19/745] Linking static target lib/librte_kvargs.a 00:01:55.319 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.319 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.320 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.320 [23/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.320 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.320 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.320 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.320 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.320 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.320 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.320 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.320 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.320 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.320 [33/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:55.587 [34/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.587 [35/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.587 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.587 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.587 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.587 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:55.587 [40/745] Generating lib/rte_eal_def with a custom command 00:01:55.587 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.587 [42/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.587 [43/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.587 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.587 [45/745] Generating lib/rte_eal_mingw with a custom command 00:01:55.587 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.587 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.587 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.587 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.587 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.587 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.587 [52/745] Generating lib/rte_ring_def with a custom command 00:01:55.587 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.587 [54/745] Generating lib/rte_ring_mingw with a custom command 00:01:55.587 [55/745] Generating lib/rte_rcu_def with a custom command 00:01:55.587 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:55.587 [57/745] Generating lib/rte_rcu_mingw with a custom command 00:01:55.587 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.587 [59/745] Generating lib/rte_mempool_def with a custom command 00:01:55.587 [60/745] Generating lib/rte_mempool_mingw with a custom command 00:01:55.587 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.587 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:55.587 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:55.587 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.587 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.587 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.587 [67/745] Generating lib/rte_net_def with a custom command 00:01:55.588 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.588 [69/745] Generating lib/rte_net_mingw with a custom command 00:01:55.588 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.588 [71/745] Generating lib/rte_meter_def with a custom command 00:01:55.588 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:55.588 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.588 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.588 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.588 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.852 [77/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.852 [78/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.853 [79/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.853 [80/745] Linking 
static target lib/librte_ring.a 00:01:55.853 [81/745] Linking target lib/librte_kvargs.so.23.0 00:01:55.853 [82/745] Generating lib/rte_ethdev_def with a custom command 00:01:55.853 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.853 [84/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.853 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.853 [86/745] Linking static target lib/librte_meter.a 00:01:55.853 [87/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:55.853 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.853 [89/745] Generating lib/rte_pci_mingw with a custom command 00:01:55.853 [90/745] Generating lib/rte_pci_def with a custom command 00:01:55.853 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.116 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.116 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.116 [94/745] Linking static target lib/librte_pci.a 00:01:56.116 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.116 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.116 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.116 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.383 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.383 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.383 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.383 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.383 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.383 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.383 [105/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.383 [106/745] Generating lib/rte_cmdline_def with a custom command 00:01:56.383 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.383 [108/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.383 [109/745] Linking static target lib/librte_telemetry.a 00:01:56.383 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.383 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.383 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:56.383 [113/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.383 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.383 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:56.383 [116/745] Generating lib/rte_metrics_def with a custom command 00:01:56.383 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.383 [118/745] Generating lib/rte_hash_def with a custom command 00:01:56.383 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:56.383 [120/745] Generating lib/rte_timer_def with a custom command 00:01:56.383 [121/745] Generating lib/rte_timer_mingw with a custom command 00:01:56.644 [122/745] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.644 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:56.644 [124/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.644 [125/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.644 [126/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.644 [127/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.644 [128/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.644 [129/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.644 [130/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.644 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.644 [132/745] Generating lib/rte_acl_def with a custom command 00:01:56.918 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:56.918 [134/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.918 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:56.918 [136/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:56.918 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:56.918 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:56.918 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:56.918 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.918 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.918 [142/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.918 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.918 [144/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:56.918 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.918 [146/745] Linking target lib/librte_telemetry.so.23.0 00:01:56.918 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.918 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.918 [149/745] Generating lib/rte_bpf_def with a custom command 00:01:56.918 [150/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:56.918 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:57.187 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.187 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.187 [154/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.187 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:57.187 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:57.187 [157/745] Generating lib/rte_compressdev_def with a custom command 00:01:57.188 [158/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:57.188 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.188 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.188 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.188 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.188 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.188 [164/745] Generating 
lib/rte_cryptodev_def with a custom command 00:01:57.188 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:57.188 [166/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.188 [167/745] Linking static target lib/librte_rcu.a 00:01:57.188 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.188 [169/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.188 [170/745] Linking static target lib/librte_cmdline.a 00:01:57.188 [171/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.188 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:57.188 [173/745] Linking static target lib/librte_net.a 00:01:57.188 [174/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.455 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:57.455 [176/745] Linking static target lib/librte_timer.a 00:01:57.455 [177/745] Generating lib/rte_efd_def with a custom command 00:01:57.455 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.455 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:57.455 [180/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:57.455 [181/745] Linking static target lib/librte_cfgfile.a 00:01:57.455 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.717 [183/745] Linking static target lib/librte_mempool.a 00:01:57.717 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:57.717 [185/745] Linking static target lib/librte_metrics.a 00:01:57.717 [186/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.717 [187/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.717 [188/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.717 [189/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.979 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:57.980 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.980 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:57.980 [193/745] Linking static target lib/librte_eal.a 00:01:57.980 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:57.980 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:57.980 [196/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.980 [197/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:57.980 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:57.980 [199/745] Linking static target lib/librte_bitratestats.a 00:01:57.980 [200/745] Generating lib/rte_eventdev_def with a custom command 00:01:57.980 [201/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:57.980 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:57.980 [203/745] Generating lib/rte_gpudev_def with a custom command 00:01:58.246 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:58.246 [205/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.246 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:58.246 [207/745] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:58.246 [208/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.246 [209/745] Generating lib/rte_gro_def with a custom command 00:01:58.246 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:58.516 [211/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.516 [212/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:58.516 [213/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.516 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:58.516 [215/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:58.516 [216/745] Generating lib/rte_gso_def with a custom command 00:01:58.516 [217/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:58.516 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:58.516 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.516 [220/745] Generating lib/rte_gso_mingw with a custom command 00:01:58.516 [221/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.516 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.779 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.779 [224/745] Linking static target lib/librte_bbdev.a 00:01:58.779 [225/745] Generating lib/rte_ip_frag_def with a custom command 00:01:58.779 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:58.779 [227/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:58.779 [228/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.779 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.779 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:58.779 [231/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:58.779 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:58.779 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:58.779 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:58.779 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:58.779 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:58.779 [237/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.779 [238/745] Linking static target lib/librte_compressdev.a 00:01:59.047 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:59.047 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:59.047 [241/745] Linking static target lib/librte_jobstats.a 00:01:59.047 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:59.047 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:59.313 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:59.313 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:59.313 [246/745] Linking static target lib/librte_distributor.a 00:01:59.313 [247/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
00:01:59.313 [248/745] Generating lib/rte_member_def with a custom command 00:01:59.313 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:59.578 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.578 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:59.578 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:59.578 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:59.578 [254/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.578 [255/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:59.578 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.578 [257/745] Linking static target lib/librte_bpf.a 00:01:59.578 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:59.578 [259/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:59.578 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:59.578 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:59.578 [262/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:59.842 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:59.843 [264/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.843 [265/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:59.843 [266/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:59.843 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:59.843 [268/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:59.843 [269/745] Linking static target lib/librte_gpudev.a 00:01:59.843 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:59.843 [271/745] Generating lib/rte_power_mingw with a custom command 00:01:59.843 [272/745] Generating lib/rte_power_def with a custom command 00:01:59.843 [273/745] Linking static target lib/librte_gro.a 00:01:59.843 [274/745] Generating lib/rte_rawdev_def with a custom command 00:01:59.843 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:59.843 [276/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:59.843 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:59.843 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:59.843 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:59.843 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:59.843 [281/745] Generating lib/rte_dmadev_def with a custom command 00:02:00.112 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:00.112 [283/745] Generating lib/rte_rib_def with a custom command 00:02:00.112 [284/745] Generating lib/rte_rib_mingw with a custom command 00:02:00.112 [285/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:00.112 [286/745] Generating lib/rte_reorder_def with a custom command 00:02:00.112 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:02:00.112 [288/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.112 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:00.375 [290/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:00.375 
[291/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:00.375 [292/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:00.375 [293/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.375 [294/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:00.375 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:00.375 [296/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:00.375 [297/745] Generating lib/rte_sched_mingw with a custom command 00:02:00.375 [298/745] Generating lib/rte_security_def with a custom command 00:02:00.375 [299/745] Generating lib/rte_sched_def with a custom command 00:02:00.375 [300/745] Generating lib/rte_security_mingw with a custom command 00:02:00.375 [301/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:00.375 [302/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:00.375 [303/745] Linking static target lib/librte_latencystats.a 00:02:00.375 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:00.375 [305/745] Generating lib/rte_stack_def with a custom command 00:02:00.375 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:00.375 [307/745] Generating lib/rte_stack_mingw with a custom command 00:02:00.375 [308/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:00.375 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:00.375 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:00.375 [311/745] Linking static target lib/librte_rawdev.a 00:02:00.375 [312/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.375 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:00.375 [314/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:00.640 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:00.640 [316/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:00.640 [317/745] Linking static target lib/librte_stack.a 00:02:00.640 [318/745] Generating lib/rte_vhost_def with a custom command 00:02:00.640 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:02:00.640 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.640 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.640 [322/745] Linking static target lib/librte_dmadev.a 00:02:00.640 [323/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:00.640 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.640 [325/745] Linking static target lib/librte_ip_frag.a 00:02:00.903 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:00.903 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:00.903 [328/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.904 [329/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:00.904 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:00.904 [331/745] Generating lib/rte_ipsec_def with a custom command 00:02:00.904 [332/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:01.167 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:01.167 [334/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.167 [335/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.167 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:01.167 [337/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.428 [338/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.428 [339/745] Generating lib/rte_fib_def with a custom command 00:02:01.428 [340/745] Generating lib/rte_fib_mingw with a custom command 00:02:01.428 [341/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.428 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:01.428 [343/745] Linking static target lib/librte_regexdev.a 00:02:01.428 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:01.428 [345/745] Linking static target lib/librte_gso.a 00:02:01.428 [346/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:01.696 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:01.696 [348/745] Linking static target lib/librte_efd.a 00:02:01.696 [349/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.696 [350/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:01.696 [351/745] Linking static target lib/librte_pcapng.a 00:02:01.696 [352/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.696 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:01.696 [354/745] Linking static target lib/librte_lpm.a 00:02:01.958 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:01.958 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.958 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:01.958 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.958 [359/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.958 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.958 [361/745] Linking static target lib/librte_reorder.a 00:02:01.958 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.223 [363/745] Generating lib/rte_port_def with a custom command 00:02:02.223 [364/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:02.223 [365/745] Linking static target lib/acl/libavx2_tmp.a 00:02:02.223 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.223 [367/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:02.223 [368/745] Generating lib/rte_port_mingw with a custom command 00:02:02.223 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:02.223 [370/745] Generating lib/rte_pdump_def with a custom command 00:02:02.223 [371/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:02.223 [372/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.223 [373/745] Generating lib/rte_pdump_mingw with a custom command 00:02:02.224 [374/745] Linking static target 
lib/fib/libtrie_avx512_tmp.a 00:02:02.224 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:02.224 [376/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.224 [377/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:02.224 [378/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:02.224 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.224 [380/745] Linking static target lib/librte_security.a 00:02:02.495 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.495 [382/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.495 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:02.495 [384/745] Linking static target lib/librte_power.a 00:02:02.495 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.495 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.495 [387/745] Linking static target lib/librte_hash.a 00:02:02.495 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:02.755 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.755 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:02.755 [391/745] Linking static target lib/librte_rib.a 00:02:02.755 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:02.755 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:02.755 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:02.755 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:02:02.755 [396/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.017 [397/745] Linking static target lib/librte_acl.a 00:02:03.017 [398/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:03.017 [399/745] Linking static target lib/librte_ethdev.a 00:02:03.017 [400/745] Generating lib/rte_table_def with a custom command 00:02:03.017 [401/745] Generating lib/rte_table_mingw with a custom command 00:02:03.017 [402/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.289 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.289 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.560 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:03.560 [406/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:03.560 [407/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:03.560 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:03.560 [409/745] Generating lib/rte_pipeline_def with a custom command 00:02:03.560 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:03.560 [411/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:03.560 [412/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:03.560 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:03.560 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:03.560 [415/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:03.560 [416/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:03.560 [417/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.560 [418/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:03.560 [419/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:03.560 [420/745] Linking static target lib/librte_fib.a 00:02:03.560 [421/745] Generating lib/rte_graph_mingw with a custom command 00:02:03.560 [422/745] Generating lib/rte_graph_def with a custom command 00:02:03.824 [423/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.824 [424/745] Linking static target lib/librte_mbuf.a 00:02:03.824 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:03.824 [426/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:03.824 [427/745] Linking static target lib/librte_member.a 00:02:04.088 [428/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.088 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:04.088 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:04.088 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:04.088 [432/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:04.088 [433/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:04.088 [434/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:04.088 [435/745] Generating lib/rte_node_def with a custom command 00:02:04.088 [436/745] Generating lib/rte_node_mingw with a custom command 00:02:04.088 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:04.088 [438/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:04.088 [439/745] Linking static target lib/librte_eventdev.a 00:02:04.351 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.351 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:04.351 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:04.351 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:04.351 [444/745] Linking static target lib/librte_sched.a 00:02:04.351 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.351 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:04.351 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.351 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.351 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:04.619 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:04.619 [451/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:04.619 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:04.619 [453/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:04.619 [454/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:04.619 [455/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:04.619 [456/745] Generating drivers/rte_mempool_ring_def with a custom 
command 00:02:04.619 [457/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:04.619 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.619 [459/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:04.619 [460/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.619 [461/745] Linking static target lib/librte_cryptodev.a 00:02:04.619 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.619 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.888 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:04.888 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:04.888 [466/745] Linking static target lib/librte_pdump.a 00:02:04.888 [467/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:04.888 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:04.888 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:04.888 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.888 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.888 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:04.888 [473/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:04.888 [474/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:04.888 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.152 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.152 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:05.152 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:05.152 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:05.152 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:05.152 [481/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.152 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:05.152 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:05.416 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:05.416 [485/745] Linking static target lib/librte_table.a 00:02:05.416 [486/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:05.416 [487/745] Linking static target lib/librte_ipsec.a 00:02:05.416 [488/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:05.416 [489/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.416 [490/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.416 [491/745] Linking static target drivers/librte_bus_vdev.a 00:02:05.680 [492/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.680 [493/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.680 [494/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:05.680 [495/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.680 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:05.680 [497/745] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:05.947 [498/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.947 [499/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.947 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:05.947 [501/745] Linking static target lib/librte_graph.a 00:02:05.947 [502/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:05.947 [503/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:05.947 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:05.947 [505/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:05.947 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.947 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:06.214 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.214 [509/745] Linking static target drivers/librte_bus_pci.a 00:02:06.214 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:06.214 [511/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.214 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:06.478 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:06.478 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.744 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:06.744 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.744 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:06.744 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:06.744 [519/745] Linking static target lib/librte_port.a 00:02:07.010 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:07.010 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:07.010 [522/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:07.277 [523/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:07.277 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:07.277 [525/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.277 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:07.549 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:07.549 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.549 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:07.549 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:07.549 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:07.549 [532/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:07.549 [533/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.549 [534/745] Linking static target 
drivers/librte_mempool_ring.a 00:02:07.549 [535/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.549 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:07.811 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:07.811 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.811 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:07.811 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:08.075 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.075 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:08.347 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:08.347 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:08.347 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:08.347 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:08.611 [547/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:08.611 [548/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:08.611 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:08.611 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:08.611 [551/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:08.877 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:08.877 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:09.142 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:09.142 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:09.414 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:09.414 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:09.414 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:09.681 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:09.681 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:09.681 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:09.681 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:09.957 [563/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:09.957 [564/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:09.957 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:09.957 [566/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:09.957 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:09.957 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:09.957 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:09.957 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:10.222 
[571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:10.222 [572/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:10.222 [573/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:10.222 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:10.484 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:10.484 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:10.484 [577/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.750 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:10.750 [579/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:10.750 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:10.750 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:10.750 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:10.750 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:10.750 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:10.750 [585/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.750 [586/745] Linking target lib/librte_eal.so.23.0 00:02:10.750 [587/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:11.012 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:11.012 [589/745] Linking target lib/librte_ring.so.23.0 00:02:11.278 [590/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:11.278 [591/745] Linking target lib/librte_meter.so.23.0 00:02:11.278 [592/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:11.278 [593/745] Linking target lib/librte_pci.so.23.0 00:02:11.278 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:11.544 [595/745] Linking target lib/librte_rcu.so.23.0 00:02:11.544 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:11.544 [597/745] Linking target lib/librte_mempool.so.23.0 00:02:11.544 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:11.544 [599/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:11.544 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:11.544 [601/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:11.544 [602/745] Linking target lib/librte_timer.so.23.0 00:02:11.810 [603/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:11.810 [604/745] Linking target lib/librte_acl.so.23.0 00:02:11.810 [605/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:11.810 [606/745] Linking target lib/librte_cfgfile.so.23.0 00:02:11.810 [607/745] Linking target lib/librte_jobstats.so.23.0 00:02:11.810 [608/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:11.810 [609/745] Linking target lib/librte_rawdev.so.23.0 00:02:11.810 [610/745] Linking target lib/librte_dmadev.so.23.0 
00:02:11.810 [611/745] Linking target lib/librte_stack.so.23.0 00:02:11.810 [612/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:11.810 [613/745] Linking target lib/librte_graph.so.23.0 00:02:11.810 [614/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:11.810 [615/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:11.810 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:11.810 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:11.810 [618/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:11.810 [619/745] Linking target lib/librte_rib.so.23.0 00:02:11.810 [620/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:11.810 [621/745] Linking target lib/librte_mbuf.so.23.0 00:02:11.811 [622/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:11.811 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:12.071 [624/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:12.071 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:12.071 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:12.071 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:12.071 [628/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:12.071 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:12.071 [630/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:12.071 [631/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:12.071 [632/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:12.071 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:12.071 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:12.071 [635/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:12.071 [636/745] Linking target lib/librte_regexdev.so.23.0 00:02:12.071 [637/745] Linking target lib/librte_fib.so.23.0 00:02:12.071 [638/745] Linking target lib/librte_gpudev.so.23.0 00:02:12.071 [639/745] Linking target lib/librte_net.so.23.0 00:02:12.071 [640/745] Linking target lib/librte_distributor.so.23.0 00:02:12.071 [641/745] Linking target lib/librte_bbdev.so.23.0 00:02:12.071 [642/745] Linking target lib/librte_sched.so.23.0 00:02:12.071 [643/745] Linking target lib/librte_reorder.so.23.0 00:02:12.071 [644/745] Linking target lib/librte_compressdev.so.23.0 00:02:12.071 [645/745] Linking target lib/librte_cryptodev.so.23.0 00:02:12.071 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:12.330 [647/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:12.330 [648/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:12.330 [649/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:12.330 [650/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:12.330 [651/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:12.330 [652/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:12.330 [653/745] Compiling C object 
app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:12.330 [654/745] Linking target lib/librte_security.so.23.0
00:02:12.330 [655/745] Linking target lib/librte_cmdline.so.23.0
00:02:12.330 [656/745] Linking target lib/librte_hash.so.23.0
00:02:12.330 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:12.330 [658/745] Linking target lib/librte_ethdev.so.23.0
00:02:12.590 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:12.591 [660/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:12.591 [661/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:12.591 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:12.591 [663/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:12.591 [664/745] Linking target lib/librte_efd.so.23.0
00:02:12.591 [665/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:12.591 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:12.591 [667/745] Linking target lib/librte_member.so.23.0
00:02:12.591 [668/745] Linking target lib/librte_lpm.so.23.0
00:02:12.591 [669/745] Linking target lib/librte_ipsec.so.23.0
00:02:12.591 [670/745] Linking target lib/librte_metrics.so.23.0
00:02:12.591 [671/745] Linking target lib/librte_gso.so.23.0
00:02:12.591 [672/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:12.591 [673/745] Linking target lib/librte_pcapng.so.23.0
00:02:12.591 [674/745] Linking target lib/librte_gro.so.23.0
00:02:12.591 [675/745] Linking target lib/librte_power.so.23.0
00:02:12.591 [676/745] Linking target lib/librte_ip_frag.so.23.0
00:02:12.591 [677/745] Linking target lib/librte_eventdev.so.23.0
00:02:12.591 [678/745] Linking target lib/librte_bpf.so.23.0
00:02:12.591 [679/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:12.851 [680/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:12.851 [681/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:12.851 [682/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:12.851 [683/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:12.851 [684/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:12.851 [685/745] Linking target lib/librte_bitratestats.so.23.0
00:02:12.851 [686/745] Linking target lib/librte_latencystats.so.23.0
00:02:12.851 [687/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:12.851 [688/745] Linking target lib/librte_pdump.so.23.0
00:02:12.851 [689/745] Linking target lib/librte_port.so.23.0
00:02:12.851 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:13.110 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:13.110 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:13.110 [693/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:13.110 [694/745] Linking target lib/librte_table.so.23.0
00:02:13.110 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:13.368 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:13.369 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:13.628 [698/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:13.628 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:13.628 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:13.887 [701/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:13.887 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:13.887 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:14.146 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:14.146 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:14.146 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:14.146 [707/745] Linking static target drivers/librte_net_i40e.a
00:02:14.404 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:14.664 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:14.664 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.664 [711/745] Linking target drivers/librte_net_i40e.so.23.0
00:02:14.664 [712/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:15.600 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:15.600 [714/745] Linking static target lib/librte_node.a
00:02:15.858 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.858 [716/745] Linking target lib/librte_node.so.23.0
00:02:16.424 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:16.991 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:17.559 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:25.672 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:57.745 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:57.745 [722/745] Linking static target lib/librte_vhost.a
00:02:57.745 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.745 [724/745] Linking target lib/librte_vhost.so.23.0
00:03:05.885 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:05.885 [726/745] Linking static target lib/librte_pipeline.a
00:03:06.144 [727/745] Linking target app/dpdk-test-acl
00:03:06.144 [728/745] Linking target app/dpdk-dumpcap
00:03:06.144 [729/745] Linking target app/dpdk-test-sad
00:03:06.144 [730/745] Linking target app/dpdk-test-fib
00:03:06.144 [731/745] Linking target app/dpdk-test-regex
00:03:06.144 [732/745] Linking target app/dpdk-test-gpudev
00:03:06.144 [733/745] Linking target app/dpdk-test-flow-perf
00:03:06.144 [734/745] Linking target app/dpdk-test-pipeline
00:03:06.144 [735/745] Linking target app/dpdk-test-security-perf
00:03:06.144 [736/745] Linking target app/dpdk-test-cmdline
00:03:06.144 [737/745] Linking target app/dpdk-test-eventdev
00:03:06.144 [738/745] Linking target app/dpdk-test-bbdev
00:03:06.144 [739/745] Linking target app/dpdk-test-compress-perf
00:03:06.144 [740/745] Linking target app/dpdk-pdump
00:03:06.144 [741/745] Linking target app/dpdk-proc-info
00:03:06.144 [742/745] Linking target app/dpdk-test-crypto-perf
00:03:06.144 [743/745] Linking target app/dpdk-testpmd
00:03:08.054 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.054 [745/745] Linking target lib/librte_pipeline.so.23.0
00:03:08.054 10:49:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s
00:03:08.054 10:49:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:08.054 10:49:22 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:03:08.054 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:03:08.054 [0/1] Installing files.
00:03:08.314 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.314 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.580 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.581 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.581 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.581 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.582 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:08.583 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:08.583 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.583 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.584 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.161 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.161 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.161 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.161 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:09.161 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.161 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.162 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
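Note (not part of the captured log): the install phase running through this section stages DPDK's public headers into dpdk/build/include. A minimal sketch of how the staged headers could be smoke-tested stand-alone, assuming the workspace layout shown above; the file name /tmp/dpdk_hdr_check.c is hypothetical, and some headers may additionally require the machine flags recorded in rte_build_config.h:

  cat > /tmp/dpdk_hdr_check.c <<'EOF'
  /* include one of the headers staged above and compile it on its own */
  #include <rte_meter.h>
  int main(void) { return 0; }
  EOF
  cc -I/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include \
     -c /tmp/dpdk_hdr_check.c -o /tmp/dpdk_hdr_check.o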
00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:09.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:09.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:09.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:09.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:09.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:09.165 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:09.165 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:09.165 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:09.165 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:09.165 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:09.165 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:09.165 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:09.165 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:09.165 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:09.165 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:09.165 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:09.165 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:09.165 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:09.165 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:09.165 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:09.165 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:09.165 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:09.165 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:09.165 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:09.165 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:09.165 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:09.165 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:09.165 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:09.165 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:09.165 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:09.165 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:09.165 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:09.165 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:09.165 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:09.165 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:09.165 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:09.165 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:09.165 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:09.165 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:09.165 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:09.165 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:09.165 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:09.165 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
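Note (not part of the captured log): each library in this run is exposed through the standard ELF version chain — an unversioned name for link time, a major-version name matching the SONAME, and the fully versioned real file. A quick sketch for inspecting one of them, assuming the staged tree from this run is still in place:

  ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*
  # librte_eal.so -> librte_eal.so.23, librte_eal.so.23 -> librte_eal.so.23.0
  readelf -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23.0 | grep SONAME
  # expected: Library soname: [librte_eal.so.23]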
00:03:09.165 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:09.165 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:09.165 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:09.165 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:09.165 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:09.165 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:09.165 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:09.165 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:09.165 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:09.165 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:09.165 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:09.165 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:09.165 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:09.165 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:09.165 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:09.165 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:09.165 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:09.165 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:09.165 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:09.165 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:09.165 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:09.165 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:09.165 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:09.165 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:09.165 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:09.165 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:09.165 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:09.165 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:09.165 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:09.165 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:09.165 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:09.165 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:09.165 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:09.165 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:09.165 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:09.165 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:09.166 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:09.166 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:09.166 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:09.166 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:09.166 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:09.166 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:09.166 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:09.166 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:09.166 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:09.166 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:09.166 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:09.166 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:09.166 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:09.166 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:09.166 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:09.166 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:09.166 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:09.166 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:09.166 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:09.166 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:09.166 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:09.166 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:09.166 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:09.166 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:09.166 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:09.166 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:09.166 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:09.166 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:09.166 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:09.166 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:09.166 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:09.166 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:09.166 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:09.166 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:09.166 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:09.166 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:09.166 './librte_bus_pci.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:03:09.166 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:03:09.166 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:03:09.166 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:03:09.166 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:03:09.166 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:03:09.166 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:03:09.166 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:03:09.166 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:03:09.166 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:03:09.166 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:03:09.166 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:03:09.166 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:03:09.166 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:03:09.166 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:03:09.166 10:49:23 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat
00:03:09.166 10:49:23 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:09.166
00:03:09.166 real 1m21.209s
00:03:09.166 user 14m20.462s
00:03:09.166 sys 1m51.094s
00:03:09.166 10:49:23 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:09.166 10:49:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:09.166 ************************************
00:03:09.166 END TEST build_native_dpdk
00:03:09.166 ************************************
00:03:09.166 10:49:23 -- common/autotest_common.sh@1142 -- $ return 0
00:03:09.166 10:49:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:09.166 10:49:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:09.166 10:49:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:09.166 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
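Note (not part of the captured log): the configure invocation above consumes the staged DPDK through the libdpdk.pc metadata installed earlier into build/lib/pkgconfig. The same lookup can be reproduced by hand; since the libraries in this run carry the .so.23 ABI, which belongs to the DPDK 22.11 release series, --modversion should report a 22.11.x value:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk
  pkg-config --cflags --libs libdpdk    # the flags a consuming build system would use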
00:03:09.423 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.423 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:09.423 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:09.682 Using 'verbs' RDMA provider
00:03:20.617 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:30.607 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:30.607 Creating mk/config.mk...done.
00:03:30.607 Creating mk/cc.flags.mk...done.
00:03:30.607 Type 'make' to build.
00:03:30.607 10:49:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:30.607 10:49:44 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:30.607 10:49:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:30.607 10:49:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:30.607 ************************************
00:03:30.607 START TEST make
00:03:30.607 ************************************
00:03:30.607 10:49:44 make -- common/autotest_common.sh@1123 -- $ make -j48
00:03:30.607 make[1]: Nothing to be done for 'all'.
00:03:31.558 The Meson build system
00:03:31.558 Version: 1.3.1
00:03:31.558 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:31.558 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:31.558 Build type: native build
00:03:31.558 Project name: libvfio-user
00:03:31.558 Project version: 0.0.1
00:03:31.558 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:31.558 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:31.558 Host machine cpu family: x86_64
00:03:31.558 Host machine cpu: x86_64
00:03:31.558 Run-time dependency threads found: YES
00:03:31.558 Library dl found: YES
00:03:31.558 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:31.558 Run-time dependency json-c found: YES 0.17
00:03:31.558 Run-time dependency cmocka found: YES 1.1.7
00:03:31.558 Program pytest-3 found: NO
00:03:31.558 Program flake8 found: NO
00:03:31.558 Program misspell-fixer found: NO
00:03:31.558 Program restructuredtext-lint found: NO
00:03:31.558 Program valgrind found: YES (/usr/bin/valgrind)
00:03:31.558 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:31.558 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:31.558 Compiler for C supports arguments -Wwrite-strings: YES
00:03:31.558 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.558 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:31.558 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:31.558 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:31.558 Build targets in project: 8
00:03:31.558 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:31.558 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:31.558
00:03:31.558 libvfio-user 0.0.1
00:03:31.558
00:03:31.558 User defined options
00:03:31.558 buildtype : debug
00:03:31.558 default_library: shared
00:03:31.558 libdir : /usr/local/lib
00:03:31.558
00:03:31.558 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:32.520 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:32.520 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:32.520 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:32.520 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:32.520 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:32.520 [5/37] Compiling C object samples/null.p/null.c.o
00:03:32.520 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:32.520 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:32.520 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:32.520 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:32.520 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:32.520 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:32.520 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:32.782 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:32.782 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:32.782 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:32.782 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:32.782 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:32.782 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:32.782 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:32.782 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:32.782 [21/37] Compiling C object samples/client.p/client.c.o
00:03:32.782 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:32.782 [23/37] Compiling C object samples/server.p/server.c.o
00:03:32.782 [24/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:32.782 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:32.782 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:32.782 [27/37] Linking target samples/client
00:03:33.048 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:33.048 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:33.048 [30/37] Linking target test/unit_tests
00:03:33.310 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:33.310 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:33.310 [33/37] Linking target samples/server
00:03:33.310 [34/37] Linking target samples/null
00:03:33.310 [35/37] Linking target samples/lspci
00:03:33.310 [36/37] Linking target samples/gpio-pci-idio-16
00:03:33.310 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:33.310 INFO: autodetecting backend as ninja
00:03:33.310 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
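Note (not part of the captured log): the libvfio-user subproject follows the usual Meson flow — configure into build-debug, build with ninja, then stage the result with a DESTDIR install, which is exactly the command on the next line. A generic sketch of that pattern with placeholder paths:

  meson setup build-debug --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR=/path/to/stage meson install --quiet -C build-debug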
00:03:33.310 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:34.256 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:34.256 ninja: no work to do. 00:03:46.461 CC lib/ut_mock/mock.o 00:03:46.461 CC lib/ut/ut.o 00:03:46.461 CC lib/log/log.o 00:03:46.461 CC lib/log/log_flags.o 00:03:46.461 CC lib/log/log_deprecated.o 00:03:46.461 LIB libspdk_log.a 00:03:46.461 LIB libspdk_ut.a 00:03:46.461 LIB libspdk_ut_mock.a 00:03:46.461 SO libspdk_ut_mock.so.6.0 00:03:46.461 SO libspdk_ut.so.2.0 00:03:46.461 SO libspdk_log.so.7.0 00:03:46.461 SYMLINK libspdk_ut_mock.so 00:03:46.461 SYMLINK libspdk_ut.so 00:03:46.461 SYMLINK libspdk_log.so 00:03:46.461 CC lib/dma/dma.o 00:03:46.461 CXX lib/trace_parser/trace.o 00:03:46.461 CC lib/ioat/ioat.o 00:03:46.461 CC lib/util/base64.o 00:03:46.461 CC lib/util/bit_array.o 00:03:46.461 CC lib/util/cpuset.o 00:03:46.461 CC lib/util/crc16.o 00:03:46.461 CC lib/util/crc32.o 00:03:46.461 CC lib/util/crc32c.o 00:03:46.461 CC lib/util/crc32_ieee.o 00:03:46.461 CC lib/util/crc64.o 00:03:46.461 CC lib/util/dif.o 00:03:46.461 CC lib/util/fd.o 00:03:46.461 CC lib/util/file.o 00:03:46.461 CC lib/util/hexlify.o 00:03:46.461 CC lib/util/iov.o 00:03:46.461 CC lib/util/math.o 00:03:46.461 CC lib/util/pipe.o 00:03:46.461 CC lib/util/strerror_tls.o 00:03:46.461 CC lib/util/string.o 00:03:46.461 CC lib/util/uuid.o 00:03:46.461 CC lib/util/fd_group.o 00:03:46.461 CC lib/util/xor.o 00:03:46.461 CC lib/util/zipf.o 00:03:46.461 CC lib/vfio_user/host/vfio_user.o 00:03:46.461 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.461 LIB libspdk_dma.a 00:03:46.461 LIB libspdk_ioat.a 00:03:46.461 SO libspdk_dma.so.4.0 00:03:46.461 SO libspdk_ioat.so.7.0 00:03:46.461 SYMLINK libspdk_dma.so 00:03:46.461 SYMLINK libspdk_ioat.so 00:03:46.461 LIB libspdk_vfio_user.a 00:03:46.461 SO libspdk_vfio_user.so.5.0 00:03:46.461 SYMLINK libspdk_vfio_user.so 00:03:46.461 LIB libspdk_util.a 00:03:46.461 SO libspdk_util.so.9.1 00:03:46.461 SYMLINK libspdk_util.so 00:03:46.461 CC lib/conf/conf.o 00:03:46.461 CC lib/rdma_utils/rdma_utils.o 00:03:46.461 CC lib/rdma_provider/common.o 00:03:46.461 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:46.461 CC lib/json/json_parse.o 00:03:46.461 CC lib/vmd/vmd.o 00:03:46.461 CC lib/idxd/idxd.o 00:03:46.461 CC lib/json/json_util.o 00:03:46.461 CC lib/idxd/idxd_user.o 00:03:46.461 CC lib/env_dpdk/env.o 00:03:46.461 CC lib/vmd/led.o 00:03:46.461 CC lib/idxd/idxd_kernel.o 00:03:46.461 CC lib/env_dpdk/memory.o 00:03:46.461 CC lib/json/json_write.o 00:03:46.461 CC lib/env_dpdk/pci.o 00:03:46.461 CC lib/env_dpdk/init.o 00:03:46.461 CC lib/env_dpdk/threads.o 00:03:46.461 CC lib/env_dpdk/pci_ioat.o 00:03:46.461 CC lib/env_dpdk/pci_virtio.o 00:03:46.461 CC lib/env_dpdk/pci_vmd.o 00:03:46.461 CC lib/env_dpdk/pci_idxd.o 00:03:46.461 CC lib/env_dpdk/pci_event.o 00:03:46.461 CC lib/env_dpdk/sigbus_handler.o 00:03:46.461 CC lib/env_dpdk/pci_dpdk.o 00:03:46.461 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:46.461 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:46.461 LIB libspdk_rdma_provider.a 00:03:46.461 SO libspdk_rdma_provider.so.6.0 00:03:46.461 LIB libspdk_rdma_utils.a 00:03:46.461 SYMLINK libspdk_rdma_provider.so 00:03:46.461 SO libspdk_rdma_utils.so.1.0 00:03:46.461 LIB libspdk_conf.a 00:03:46.461 SO libspdk_conf.so.6.0 00:03:46.719 SYMLINK libspdk_rdma_utils.so 00:03:46.719 LIB libspdk_json.a 
00:03:46.719 SYMLINK libspdk_conf.so 00:03:46.719 SO libspdk_json.so.6.0 00:03:46.719 SYMLINK libspdk_json.so 00:03:46.719 LIB libspdk_idxd.a 00:03:46.719 SO libspdk_idxd.so.12.0 00:03:46.977 CC lib/jsonrpc/jsonrpc_server.o 00:03:46.977 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.977 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.977 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.977 SYMLINK libspdk_idxd.so 00:03:46.977 LIB libspdk_vmd.a 00:03:46.977 SO libspdk_vmd.so.6.0 00:03:46.977 SYMLINK libspdk_vmd.so 00:03:47.235 LIB libspdk_jsonrpc.a 00:03:47.235 LIB libspdk_trace_parser.a 00:03:47.235 SO libspdk_jsonrpc.so.6.0 00:03:47.235 SO libspdk_trace_parser.so.5.0 00:03:47.236 SYMLINK libspdk_jsonrpc.so 00:03:47.236 SYMLINK libspdk_trace_parser.so 00:03:47.494 CC lib/rpc/rpc.o 00:03:47.753 LIB libspdk_rpc.a 00:03:47.753 SO libspdk_rpc.so.6.0 00:03:47.753 SYMLINK libspdk_rpc.so 00:03:47.753 CC lib/notify/notify.o 00:03:47.753 CC lib/trace/trace.o 00:03:47.753 CC lib/notify/notify_rpc.o 00:03:47.753 CC lib/trace/trace_flags.o 00:03:47.753 CC lib/trace/trace_rpc.o 00:03:47.753 CC lib/keyring/keyring.o 00:03:48.012 CC lib/keyring/keyring_rpc.o 00:03:48.012 LIB libspdk_notify.a 00:03:48.012 SO libspdk_notify.so.6.0 00:03:48.012 LIB libspdk_keyring.a 00:03:48.012 SYMLINK libspdk_notify.so 00:03:48.012 LIB libspdk_trace.a 00:03:48.270 SO libspdk_keyring.so.1.0 00:03:48.270 SO libspdk_trace.so.10.0 00:03:48.270 SYMLINK libspdk_keyring.so 00:03:48.270 SYMLINK libspdk_trace.so 00:03:48.270 LIB libspdk_env_dpdk.a 00:03:48.270 SO libspdk_env_dpdk.so.14.1 00:03:48.270 CC lib/thread/thread.o 00:03:48.270 CC lib/thread/iobuf.o 00:03:48.529 CC lib/sock/sock.o 00:03:48.529 CC lib/sock/sock_rpc.o 00:03:48.529 SYMLINK libspdk_env_dpdk.so 00:03:48.788 LIB libspdk_sock.a 00:03:48.788 SO libspdk_sock.so.10.0 00:03:48.788 SYMLINK libspdk_sock.so 00:03:49.047 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.047 CC lib/nvme/nvme_ctrlr.o 00:03:49.047 CC lib/nvme/nvme_fabric.o 00:03:49.047 CC lib/nvme/nvme_ns_cmd.o 00:03:49.047 CC lib/nvme/nvme_ns.o 00:03:49.047 CC lib/nvme/nvme_pcie_common.o 00:03:49.047 CC lib/nvme/nvme_pcie.o 00:03:49.048 CC lib/nvme/nvme_qpair.o 00:03:49.048 CC lib/nvme/nvme.o 00:03:49.048 CC lib/nvme/nvme_quirks.o 00:03:49.048 CC lib/nvme/nvme_transport.o 00:03:49.048 CC lib/nvme/nvme_discovery.o 00:03:49.048 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.048 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.048 CC lib/nvme/nvme_tcp.o 00:03:49.048 CC lib/nvme/nvme_opal.o 00:03:49.048 CC lib/nvme/nvme_io_msg.o 00:03:49.048 CC lib/nvme/nvme_poll_group.o 00:03:49.048 CC lib/nvme/nvme_zns.o 00:03:49.048 CC lib/nvme/nvme_stubs.o 00:03:49.048 CC lib/nvme/nvme_auth.o 00:03:49.048 CC lib/nvme/nvme_cuse.o 00:03:49.048 CC lib/nvme/nvme_vfio_user.o 00:03:49.048 CC lib/nvme/nvme_rdma.o 00:03:49.988 LIB libspdk_thread.a 00:03:49.988 SO libspdk_thread.so.10.1 00:03:49.988 SYMLINK libspdk_thread.so 00:03:50.247 CC lib/accel/accel.o 00:03:50.247 CC lib/accel/accel_rpc.o 00:03:50.247 CC lib/vfu_tgt/tgt_endpoint.o 00:03:50.247 CC lib/virtio/virtio.o 00:03:50.247 CC lib/init/json_config.o 00:03:50.247 CC lib/accel/accel_sw.o 00:03:50.247 CC lib/vfu_tgt/tgt_rpc.o 00:03:50.247 CC lib/virtio/virtio_vhost_user.o 00:03:50.247 CC lib/init/subsystem.o 00:03:50.247 CC lib/init/subsystem_rpc.o 00:03:50.247 CC lib/virtio/virtio_vfio_user.o 00:03:50.247 CC lib/blob/blobstore.o 00:03:50.247 CC lib/virtio/virtio_pci.o 00:03:50.247 CC lib/blob/request.o 00:03:50.247 CC lib/init/rpc.o 00:03:50.247 CC lib/blob/zeroes.o 00:03:50.247 CC lib/blob/blob_bs_dev.o 
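Note (not part of the captured log): the markers in this make output appear to be SPDK's quiet-build abbreviations — CC/CXX for compiling an object, LIB for creating the static archive, and, since configure ran with --with-shared, SO for linking the versioned shared object with SYMLINK for its unversioned alias. A sketch of the artifacts each component should leave behind (version numbers vary per library; libspdk_log.so.7.0 is the one actually logged above):

  ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib/libspdk_log.*
  # expected: libspdk_log.a, libspdk_log.so.7.0, libspdk_log.so -> libspdk_log.so.7.0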
00:03:50.505 LIB libspdk_init.a 00:03:50.505 SO libspdk_init.so.5.0 00:03:50.505 LIB libspdk_virtio.a 00:03:50.505 SYMLINK libspdk_init.so 00:03:50.505 SO libspdk_virtio.so.7.0 00:03:50.505 LIB libspdk_vfu_tgt.a 00:03:50.763 SO libspdk_vfu_tgt.so.3.0 00:03:50.763 SYMLINK libspdk_virtio.so 00:03:50.763 SYMLINK libspdk_vfu_tgt.so 00:03:50.763 CC lib/event/app.o 00:03:50.763 CC lib/event/reactor.o 00:03:50.763 CC lib/event/log_rpc.o 00:03:50.763 CC lib/event/app_rpc.o 00:03:50.763 CC lib/event/scheduler_static.o 00:03:51.329 LIB libspdk_event.a 00:03:51.329 SO libspdk_event.so.14.0 00:03:51.329 LIB libspdk_accel.a 00:03:51.329 SYMLINK libspdk_event.so 00:03:51.329 SO libspdk_accel.so.15.1 00:03:51.329 SYMLINK libspdk_accel.so 00:03:51.329 LIB libspdk_nvme.a 00:03:51.587 CC lib/bdev/bdev.o 00:03:51.587 CC lib/bdev/bdev_rpc.o 00:03:51.587 CC lib/bdev/bdev_zone.o 00:03:51.587 CC lib/bdev/part.o 00:03:51.587 CC lib/bdev/scsi_nvme.o 00:03:51.587 SO libspdk_nvme.so.13.1 00:03:51.845 SYMLINK libspdk_nvme.so 00:03:53.217 LIB libspdk_blob.a 00:03:53.217 SO libspdk_blob.so.11.0 00:03:53.475 SYMLINK libspdk_blob.so 00:03:53.475 CC lib/blobfs/blobfs.o 00:03:53.475 CC lib/blobfs/tree.o 00:03:53.475 CC lib/lvol/lvol.o 00:03:54.042 LIB libspdk_bdev.a 00:03:54.042 SO libspdk_bdev.so.15.1 00:03:54.306 SYMLINK libspdk_bdev.so 00:03:54.306 LIB libspdk_blobfs.a 00:03:54.306 SO libspdk_blobfs.so.10.0 00:03:54.306 CC lib/nbd/nbd.o 00:03:54.306 CC lib/ublk/ublk.o 00:03:54.306 CC lib/nbd/nbd_rpc.o 00:03:54.307 CC lib/ublk/ublk_rpc.o 00:03:54.307 CC lib/scsi/dev.o 00:03:54.307 CC lib/scsi/lun.o 00:03:54.307 CC lib/scsi/port.o 00:03:54.307 CC lib/nvmf/ctrlr.o 00:03:54.307 CC lib/scsi/scsi.o 00:03:54.307 CC lib/nvmf/ctrlr_discovery.o 00:03:54.307 CC lib/ftl/ftl_core.o 00:03:54.307 CC lib/scsi/scsi_bdev.o 00:03:54.307 CC lib/nvmf/ctrlr_bdev.o 00:03:54.307 CC lib/ftl/ftl_init.o 00:03:54.307 CC lib/scsi/scsi_pr.o 00:03:54.307 CC lib/nvmf/subsystem.o 00:03:54.307 CC lib/scsi/scsi_rpc.o 00:03:54.307 CC lib/ftl/ftl_layout.o 00:03:54.307 CC lib/scsi/task.o 00:03:54.307 CC lib/ftl/ftl_debug.o 00:03:54.307 CC lib/nvmf/nvmf.o 00:03:54.307 CC lib/nvmf/nvmf_rpc.o 00:03:54.307 CC lib/ftl/ftl_io.o 00:03:54.307 CC lib/ftl/ftl_sb.o 00:03:54.307 CC lib/nvmf/transport.o 00:03:54.307 CC lib/nvmf/tcp.o 00:03:54.307 CC lib/ftl/ftl_l2p.o 00:03:54.307 CC lib/ftl/ftl_l2p_flat.o 00:03:54.307 CC lib/nvmf/stubs.o 00:03:54.307 CC lib/nvmf/mdns_server.o 00:03:54.307 CC lib/nvmf/vfio_user.o 00:03:54.307 CC lib/ftl/ftl_nv_cache.o 00:03:54.307 CC lib/nvmf/rdma.o 00:03:54.307 CC lib/ftl/ftl_band.o 00:03:54.307 CC lib/nvmf/auth.o 00:03:54.307 CC lib/ftl/ftl_band_ops.o 00:03:54.307 CC lib/ftl/ftl_writer.o 00:03:54.307 CC lib/ftl/ftl_rq.o 00:03:54.307 CC lib/ftl/ftl_reloc.o 00:03:54.307 CC lib/ftl/ftl_l2p_cache.o 00:03:54.307 CC lib/ftl/ftl_p2l.o 00:03:54.307 CC lib/ftl/mngt/ftl_mngt.o 00:03:54.307 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:54.307 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:54.307 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:54.307 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:54.307 SYMLINK libspdk_blobfs.so 00:03:54.570 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:54.570 LIB libspdk_lvol.a 00:03:54.570 SO libspdk_lvol.so.10.0 00:03:54.836 SYMLINK libspdk_lvol.so 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:54.836 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:54.836 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:54.836 CC lib/ftl/utils/ftl_conf.o 00:03:54.836 CC lib/ftl/utils/ftl_md.o 00:03:54.836 CC lib/ftl/utils/ftl_mempool.o 00:03:54.836 CC lib/ftl/utils/ftl_bitmap.o 00:03:54.836 CC lib/ftl/utils/ftl_property.o 00:03:54.836 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:54.836 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:54.836 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:54.836 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:54.836 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:54.836 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:54.836 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:55.096 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:55.096 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:55.096 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:55.096 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:55.096 CC lib/ftl/base/ftl_base_dev.o 00:03:55.096 CC lib/ftl/base/ftl_base_bdev.o 00:03:55.096 CC lib/ftl/ftl_trace.o 00:03:55.096 LIB libspdk_nbd.a 00:03:55.355 SO libspdk_nbd.so.7.0 00:03:55.355 SYMLINK libspdk_nbd.so 00:03:55.355 LIB libspdk_scsi.a 00:03:55.355 SO libspdk_scsi.so.9.0 00:03:55.613 SYMLINK libspdk_scsi.so 00:03:55.613 LIB libspdk_ublk.a 00:03:55.613 SO libspdk_ublk.so.3.0 00:03:55.613 SYMLINK libspdk_ublk.so 00:03:55.613 CC lib/vhost/vhost.o 00:03:55.613 CC lib/iscsi/conn.o 00:03:55.613 CC lib/iscsi/init_grp.o 00:03:55.613 CC lib/vhost/vhost_rpc.o 00:03:55.613 CC lib/vhost/vhost_scsi.o 00:03:55.613 CC lib/iscsi/iscsi.o 00:03:55.613 CC lib/vhost/vhost_blk.o 00:03:55.613 CC lib/iscsi/md5.o 00:03:55.613 CC lib/vhost/rte_vhost_user.o 00:03:55.613 CC lib/iscsi/param.o 00:03:55.613 CC lib/iscsi/portal_grp.o 00:03:55.613 CC lib/iscsi/tgt_node.o 00:03:55.613 CC lib/iscsi/iscsi_subsystem.o 00:03:55.613 CC lib/iscsi/iscsi_rpc.o 00:03:55.613 CC lib/iscsi/task.o 00:03:55.871 LIB libspdk_ftl.a 00:03:56.130 SO libspdk_ftl.so.9.0 00:03:56.388 SYMLINK libspdk_ftl.so 00:03:56.955 LIB libspdk_vhost.a 00:03:56.955 SO libspdk_vhost.so.8.0 00:03:56.955 SYMLINK libspdk_vhost.so 00:03:56.955 LIB libspdk_nvmf.a 00:03:57.216 LIB libspdk_iscsi.a 00:03:57.216 SO libspdk_nvmf.so.18.1 00:03:57.216 SO libspdk_iscsi.so.8.0 00:03:57.216 SYMLINK libspdk_iscsi.so 00:03:57.216 SYMLINK libspdk_nvmf.so 00:03:57.475 CC module/env_dpdk/env_dpdk_rpc.o 00:03:57.475 CC module/vfu_device/vfu_virtio.o 00:03:57.475 CC module/vfu_device/vfu_virtio_blk.o 00:03:57.475 CC module/vfu_device/vfu_virtio_scsi.o 00:03:57.475 CC module/vfu_device/vfu_virtio_rpc.o 00:03:57.733 CC module/accel/error/accel_error.o 00:03:57.733 CC module/sock/posix/posix.o 00:03:57.733 CC module/accel/dsa/accel_dsa.o 00:03:57.733 CC module/accel/error/accel_error_rpc.o 00:03:57.733 CC module/accel/dsa/accel_dsa_rpc.o 00:03:57.733 CC module/keyring/file/keyring.o 00:03:57.733 CC module/keyring/file/keyring_rpc.o 00:03:57.733 CC module/scheduler/gscheduler/gscheduler.o 00:03:57.733 CC module/keyring/linux/keyring.o 00:03:57.733 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:57.733 CC module/keyring/linux/keyring_rpc.o 00:03:57.733 CC module/accel/iaa/accel_iaa.o 00:03:57.733 CC module/blob/bdev/blob_bdev.o 00:03:57.733 CC module/accel/iaa/accel_iaa_rpc.o 00:03:57.733 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:57.733 CC module/accel/ioat/accel_ioat.o 00:03:57.734 CC module/accel/ioat/accel_ioat_rpc.o 00:03:57.734 LIB libspdk_env_dpdk_rpc.a 00:03:57.734 SO libspdk_env_dpdk_rpc.so.6.0 00:03:57.734 SYMLINK libspdk_env_dpdk_rpc.so 00:03:57.734 LIB libspdk_keyring_linux.a 00:03:57.734 LIB libspdk_keyring_file.a 00:03:57.734 LIB libspdk_scheduler_gscheduler.a 
00:03:57.734 LIB libspdk_scheduler_dpdk_governor.a 00:03:57.734 SO libspdk_keyring_linux.so.1.0 00:03:57.734 SO libspdk_keyring_file.so.1.0 00:03:57.734 SO libspdk_scheduler_gscheduler.so.4.0 00:03:57.992 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:57.992 LIB libspdk_accel_error.a 00:03:57.992 LIB libspdk_scheduler_dynamic.a 00:03:57.992 LIB libspdk_accel_ioat.a 00:03:57.992 LIB libspdk_accel_iaa.a 00:03:57.992 SO libspdk_scheduler_dynamic.so.4.0 00:03:57.992 SO libspdk_accel_error.so.2.0 00:03:57.992 SO libspdk_accel_ioat.so.6.0 00:03:57.992 SYMLINK libspdk_scheduler_gscheduler.so 00:03:57.992 SYMLINK libspdk_keyring_file.so 00:03:57.992 SYMLINK libspdk_keyring_linux.so 00:03:57.992 SO libspdk_accel_iaa.so.3.0 00:03:57.992 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:57.992 LIB libspdk_accel_dsa.a 00:03:57.992 SYMLINK libspdk_scheduler_dynamic.so 00:03:57.992 SYMLINK libspdk_accel_error.so 00:03:57.992 SYMLINK libspdk_accel_ioat.so 00:03:57.992 LIB libspdk_blob_bdev.a 00:03:57.992 SO libspdk_accel_dsa.so.5.0 00:03:57.992 SYMLINK libspdk_accel_iaa.so 00:03:57.992 SO libspdk_blob_bdev.so.11.0 00:03:57.992 SYMLINK libspdk_accel_dsa.so 00:03:57.992 SYMLINK libspdk_blob_bdev.so 00:03:58.253 LIB libspdk_vfu_device.a 00:03:58.253 SO libspdk_vfu_device.so.3.0 00:03:58.253 CC module/bdev/delay/vbdev_delay.o 00:03:58.253 CC module/bdev/null/bdev_null.o 00:03:58.253 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:58.253 CC module/bdev/null/bdev_null_rpc.o 00:03:58.253 CC module/blobfs/bdev/blobfs_bdev.o 00:03:58.253 CC module/bdev/error/vbdev_error.o 00:03:58.253 CC module/bdev/malloc/bdev_malloc.o 00:03:58.253 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:58.253 CC module/bdev/nvme/bdev_nvme.o 00:03:58.253 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:58.253 CC module/bdev/error/vbdev_error_rpc.o 00:03:58.253 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:58.253 CC module/bdev/gpt/gpt.o 00:03:58.253 CC module/bdev/gpt/vbdev_gpt.o 00:03:58.253 CC module/bdev/nvme/nvme_rpc.o 00:03:58.253 CC module/bdev/aio/bdev_aio.o 00:03:58.253 CC module/bdev/nvme/bdev_mdns_client.o 00:03:58.253 CC module/bdev/passthru/vbdev_passthru.o 00:03:58.253 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:58.253 CC module/bdev/ftl/bdev_ftl.o 00:03:58.253 CC module/bdev/split/vbdev_split.o 00:03:58.253 CC module/bdev/nvme/vbdev_opal.o 00:03:58.253 CC module/bdev/aio/bdev_aio_rpc.o 00:03:58.253 CC module/bdev/lvol/vbdev_lvol.o 00:03:58.253 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:58.253 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:58.253 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:58.253 CC module/bdev/split/vbdev_split_rpc.o 00:03:58.253 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:58.253 CC module/bdev/raid/bdev_raid.o 00:03:58.253 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:58.253 CC module/bdev/raid/bdev_raid_rpc.o 00:03:58.253 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:58.253 CC module/bdev/raid/bdev_raid_sb.o 00:03:58.253 CC module/bdev/raid/raid0.o 00:03:58.253 CC module/bdev/iscsi/bdev_iscsi.o 00:03:58.253 CC module/bdev/raid/raid1.o 00:03:58.253 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:58.253 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:58.253 CC module/bdev/raid/concat.o 00:03:58.253 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:58.253 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:58.539 SYMLINK libspdk_vfu_device.so 00:03:58.539 LIB libspdk_sock_posix.a 00:03:58.539 SO libspdk_sock_posix.so.6.0 00:03:58.539 SYMLINK libspdk_sock_posix.so 00:03:58.797 LIB libspdk_blobfs_bdev.a 00:03:58.798 SO 
libspdk_blobfs_bdev.so.6.0 00:03:58.798 LIB libspdk_bdev_split.a 00:03:58.798 SYMLINK libspdk_blobfs_bdev.so 00:03:58.798 LIB libspdk_bdev_ftl.a 00:03:58.798 LIB libspdk_bdev_gpt.a 00:03:58.798 SO libspdk_bdev_split.so.6.0 00:03:58.798 LIB libspdk_bdev_null.a 00:03:58.798 LIB libspdk_bdev_error.a 00:03:58.798 SO libspdk_bdev_gpt.so.6.0 00:03:58.798 SO libspdk_bdev_ftl.so.6.0 00:03:58.798 SO libspdk_bdev_null.so.6.0 00:03:58.798 LIB libspdk_bdev_passthru.a 00:03:58.798 LIB libspdk_bdev_aio.a 00:03:58.798 SO libspdk_bdev_error.so.6.0 00:03:58.798 SYMLINK libspdk_bdev_split.so 00:03:58.798 LIB libspdk_bdev_malloc.a 00:03:58.798 LIB libspdk_bdev_iscsi.a 00:03:58.798 SO libspdk_bdev_passthru.so.6.0 00:03:58.798 SO libspdk_bdev_aio.so.6.0 00:03:58.798 SYMLINK libspdk_bdev_gpt.so 00:03:58.798 SYMLINK libspdk_bdev_ftl.so 00:03:58.798 SYMLINK libspdk_bdev_null.so 00:03:58.798 SO libspdk_bdev_malloc.so.6.0 00:03:58.798 SO libspdk_bdev_iscsi.so.6.0 00:03:58.798 LIB libspdk_bdev_zone_block.a 00:03:58.798 SYMLINK libspdk_bdev_error.so 00:03:58.798 LIB libspdk_bdev_lvol.a 00:03:59.056 LIB libspdk_bdev_delay.a 00:03:59.056 SO libspdk_bdev_zone_block.so.6.0 00:03:59.056 SYMLINK libspdk_bdev_aio.so 00:03:59.056 SYMLINK libspdk_bdev_passthru.so 00:03:59.056 SO libspdk_bdev_lvol.so.6.0 00:03:59.056 SO libspdk_bdev_delay.so.6.0 00:03:59.056 SYMLINK libspdk_bdev_malloc.so 00:03:59.056 SYMLINK libspdk_bdev_iscsi.so 00:03:59.056 SYMLINK libspdk_bdev_zone_block.so 00:03:59.056 SYMLINK libspdk_bdev_lvol.so 00:03:59.056 SYMLINK libspdk_bdev_delay.so 00:03:59.056 LIB libspdk_bdev_virtio.a 00:03:59.056 SO libspdk_bdev_virtio.so.6.0 00:03:59.056 SYMLINK libspdk_bdev_virtio.so 00:03:59.628 LIB libspdk_bdev_raid.a 00:03:59.628 SO libspdk_bdev_raid.so.6.0 00:03:59.628 SYMLINK libspdk_bdev_raid.so 00:04:00.568 LIB libspdk_bdev_nvme.a 00:04:00.568 SO libspdk_bdev_nvme.so.7.0 00:04:00.826 SYMLINK libspdk_bdev_nvme.so 00:04:01.084 CC module/event/subsystems/sock/sock.o 00:04:01.084 CC module/event/subsystems/iobuf/iobuf.o 00:04:01.084 CC module/event/subsystems/keyring/keyring.o 00:04:01.084 CC module/event/subsystems/vmd/vmd.o 00:04:01.084 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:01.084 CC module/event/subsystems/scheduler/scheduler.o 00:04:01.084 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:01.084 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:01.084 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:01.344 LIB libspdk_event_keyring.a 00:04:01.344 LIB libspdk_event_vhost_blk.a 00:04:01.344 LIB libspdk_event_vfu_tgt.a 00:04:01.344 LIB libspdk_event_scheduler.a 00:04:01.344 LIB libspdk_event_vmd.a 00:04:01.344 LIB libspdk_event_sock.a 00:04:01.344 LIB libspdk_event_iobuf.a 00:04:01.344 SO libspdk_event_keyring.so.1.0 00:04:01.344 SO libspdk_event_vfu_tgt.so.3.0 00:04:01.344 SO libspdk_event_scheduler.so.4.0 00:04:01.344 SO libspdk_event_vhost_blk.so.3.0 00:04:01.344 SO libspdk_event_vmd.so.6.0 00:04:01.344 SO libspdk_event_sock.so.5.0 00:04:01.344 SO libspdk_event_iobuf.so.3.0 00:04:01.344 SYMLINK libspdk_event_keyring.so 00:04:01.344 SYMLINK libspdk_event_scheduler.so 00:04:01.344 SYMLINK libspdk_event_vfu_tgt.so 00:04:01.344 SYMLINK libspdk_event_vhost_blk.so 00:04:01.344 SYMLINK libspdk_event_sock.so 00:04:01.344 SYMLINK libspdk_event_vmd.so 00:04:01.344 SYMLINK libspdk_event_iobuf.so 00:04:01.603 CC module/event/subsystems/accel/accel.o 00:04:01.603 LIB libspdk_event_accel.a 00:04:01.603 SO libspdk_event_accel.so.6.0 00:04:01.861 SYMLINK libspdk_event_accel.so 00:04:01.861 CC 
module/event/subsystems/bdev/bdev.o 00:04:02.120 LIB libspdk_event_bdev.a 00:04:02.120 SO libspdk_event_bdev.so.6.0 00:04:02.120 SYMLINK libspdk_event_bdev.so 00:04:02.379 CC module/event/subsystems/ublk/ublk.o 00:04:02.379 CC module/event/subsystems/nbd/nbd.o 00:04:02.379 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:02.379 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:02.379 CC module/event/subsystems/scsi/scsi.o 00:04:02.638 LIB libspdk_event_ublk.a 00:04:02.638 LIB libspdk_event_nbd.a 00:04:02.638 LIB libspdk_event_scsi.a 00:04:02.638 SO libspdk_event_nbd.so.6.0 00:04:02.638 SO libspdk_event_ublk.so.3.0 00:04:02.638 SO libspdk_event_scsi.so.6.0 00:04:02.638 SYMLINK libspdk_event_nbd.so 00:04:02.638 SYMLINK libspdk_event_ublk.so 00:04:02.638 SYMLINK libspdk_event_scsi.so 00:04:02.638 LIB libspdk_event_nvmf.a 00:04:02.638 SO libspdk_event_nvmf.so.6.0 00:04:02.638 SYMLINK libspdk_event_nvmf.so 00:04:02.897 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.897 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.897 LIB libspdk_event_vhost_scsi.a 00:04:02.897 LIB libspdk_event_iscsi.a 00:04:02.897 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.897 SO libspdk_event_iscsi.so.6.0 00:04:02.897 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.897 SYMLINK libspdk_event_iscsi.so 00:04:03.157 SO libspdk.so.6.0 00:04:03.157 SYMLINK libspdk.so 00:04:03.423 CXX app/trace/trace.o 00:04:03.423 CC app/trace_record/trace_record.o 00:04:03.423 CC app/spdk_lspci/spdk_lspci.o 00:04:03.423 CC app/spdk_nvme_perf/perf.o 00:04:03.423 TEST_HEADER include/spdk/accel.h 00:04:03.423 TEST_HEADER include/spdk/accel_module.h 00:04:03.423 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.423 TEST_HEADER include/spdk/assert.h 00:04:03.423 TEST_HEADER include/spdk/barrier.h 00:04:03.423 CC test/rpc_client/rpc_client_test.o 00:04:03.423 TEST_HEADER include/spdk/base64.h 00:04:03.423 TEST_HEADER include/spdk/bdev.h 00:04:03.423 TEST_HEADER include/spdk/bdev_module.h 00:04:03.423 CC app/spdk_top/spdk_top.o 00:04:03.423 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.423 CC app/spdk_nvme_identify/identify.o 00:04:03.423 TEST_HEADER include/spdk/bit_array.h 00:04:03.423 TEST_HEADER include/spdk/bit_pool.h 00:04:03.423 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.423 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.423 TEST_HEADER include/spdk/blobfs.h 00:04:03.423 TEST_HEADER include/spdk/blob.h 00:04:03.423 TEST_HEADER include/spdk/conf.h 00:04:03.423 TEST_HEADER include/spdk/config.h 00:04:03.423 TEST_HEADER include/spdk/cpuset.h 00:04:03.423 TEST_HEADER include/spdk/crc16.h 00:04:03.423 TEST_HEADER include/spdk/crc32.h 00:04:03.423 TEST_HEADER include/spdk/crc64.h 00:04:03.423 TEST_HEADER include/spdk/dma.h 00:04:03.423 TEST_HEADER include/spdk/dif.h 00:04:03.423 TEST_HEADER include/spdk/endian.h 00:04:03.423 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.423 TEST_HEADER include/spdk/env.h 00:04:03.423 TEST_HEADER include/spdk/event.h 00:04:03.423 TEST_HEADER include/spdk/fd_group.h 00:04:03.423 TEST_HEADER include/spdk/fd.h 00:04:03.423 TEST_HEADER include/spdk/file.h 00:04:03.423 TEST_HEADER include/spdk/ftl.h 00:04:03.423 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.423 TEST_HEADER include/spdk/hexlify.h 00:04:03.423 TEST_HEADER include/spdk/histogram_data.h 00:04:03.423 TEST_HEADER include/spdk/idxd.h 00:04:03.423 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.423 TEST_HEADER include/spdk/init.h 00:04:03.423 TEST_HEADER include/spdk/ioat.h 00:04:03.423 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.423 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:03.423 TEST_HEADER include/spdk/json.h 00:04:03.423 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.423 TEST_HEADER include/spdk/keyring.h 00:04:03.423 TEST_HEADER include/spdk/keyring_module.h 00:04:03.423 TEST_HEADER include/spdk/likely.h 00:04:03.423 TEST_HEADER include/spdk/log.h 00:04:03.423 TEST_HEADER include/spdk/lvol.h 00:04:03.423 TEST_HEADER include/spdk/memory.h 00:04:03.423 TEST_HEADER include/spdk/mmio.h 00:04:03.423 TEST_HEADER include/spdk/nbd.h 00:04:03.423 TEST_HEADER include/spdk/notify.h 00:04:03.423 TEST_HEADER include/spdk/nvme.h 00:04:03.423 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.423 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.423 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvme_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvme_zns.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:03.423 TEST_HEADER include/spdk/nvmf.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_spec.h 00:04:03.423 TEST_HEADER include/spdk/opal.h 00:04:03.423 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.423 TEST_HEADER include/spdk/opal_spec.h 00:04:03.423 TEST_HEADER include/spdk/pci_ids.h 00:04:03.423 TEST_HEADER include/spdk/pipe.h 00:04:03.423 TEST_HEADER include/spdk/queue.h 00:04:03.423 TEST_HEADER include/spdk/reduce.h 00:04:03.423 TEST_HEADER include/spdk/rpc.h 00:04:03.423 TEST_HEADER include/spdk/scheduler.h 00:04:03.423 TEST_HEADER include/spdk/scsi.h 00:04:03.423 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.423 TEST_HEADER include/spdk/sock.h 00:04:03.423 TEST_HEADER include/spdk/stdinc.h 00:04:03.423 TEST_HEADER include/spdk/string.h 00:04:03.423 TEST_HEADER include/spdk/thread.h 00:04:03.423 TEST_HEADER include/spdk/trace.h 00:04:03.423 TEST_HEADER include/spdk/trace_parser.h 00:04:03.423 TEST_HEADER include/spdk/tree.h 00:04:03.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.424 TEST_HEADER include/spdk/ublk.h 00:04:03.424 TEST_HEADER include/spdk/util.h 00:04:03.424 TEST_HEADER include/spdk/uuid.h 00:04:03.424 TEST_HEADER include/spdk/version.h 00:04:03.424 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.424 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.424 TEST_HEADER include/spdk/vhost.h 00:04:03.424 TEST_HEADER include/spdk/vmd.h 00:04:03.424 TEST_HEADER include/spdk/xor.h 00:04:03.424 TEST_HEADER include/spdk/zipf.h 00:04:03.424 CXX test/cpp_headers/accel_module.o 00:04:03.424 CXX test/cpp_headers/accel.o 00:04:03.424 CXX test/cpp_headers/assert.o 00:04:03.424 CXX test/cpp_headers/barrier.o 00:04:03.424 CXX test/cpp_headers/base64.o 00:04:03.424 CXX test/cpp_headers/bdev.o 00:04:03.424 CC app/spdk_dd/spdk_dd.o 00:04:03.424 CXX test/cpp_headers/bdev_module.o 00:04:03.424 CXX test/cpp_headers/bdev_zone.o 00:04:03.424 CXX test/cpp_headers/bit_array.o 00:04:03.424 CXX test/cpp_headers/bit_pool.o 00:04:03.424 CXX test/cpp_headers/blob_bdev.o 00:04:03.424 CXX test/cpp_headers/blobfs_bdev.o 00:04:03.424 CXX test/cpp_headers/blobfs.o 00:04:03.424 CXX test/cpp_headers/blob.o 00:04:03.424 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.424 CXX test/cpp_headers/conf.o 00:04:03.424 CXX test/cpp_headers/config.o 00:04:03.424 CXX test/cpp_headers/cpuset.o 00:04:03.424 CXX test/cpp_headers/crc16.o 00:04:03.424 CC app/nvmf_tgt/nvmf_main.o 00:04:03.424 CXX test/cpp_headers/crc32.o 00:04:03.424 CC examples/ioat/perf/perf.o 00:04:03.424 CC test/app/jsoncat/jsoncat.o 00:04:03.424 CC examples/ioat/verify/verify.o 00:04:03.424 CC 
test/thread/poller_perf/poller_perf.o 00:04:03.424 CC examples/util/zipf/zipf.o 00:04:03.424 CC test/app/stub/stub.o 00:04:03.424 CC test/app/histogram_perf/histogram_perf.o 00:04:03.424 CC app/spdk_tgt/spdk_tgt.o 00:04:03.424 CC test/env/pci/pci_ut.o 00:04:03.424 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.424 CC test/env/memory/memory_ut.o 00:04:03.424 CC app/fio/nvme/fio_plugin.o 00:04:03.424 CC test/env/vtophys/vtophys.o 00:04:03.424 CC test/app/bdev_svc/bdev_svc.o 00:04:03.424 CC test/dma/test_dma/test_dma.o 00:04:03.424 CC app/fio/bdev/fio_plugin.o 00:04:03.695 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.695 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.695 LINK spdk_lspci 00:04:03.695 LINK spdk_nvme_discover 00:04:03.695 LINK rpc_client_test 00:04:03.695 LINK interrupt_tgt 00:04:03.695 LINK jsoncat 00:04:03.695 LINK poller_perf 00:04:03.695 CXX test/cpp_headers/crc64.o 00:04:03.957 LINK histogram_perf 00:04:03.957 LINK zipf 00:04:03.957 CXX test/cpp_headers/dif.o 00:04:03.957 CXX test/cpp_headers/dma.o 00:04:03.957 LINK vtophys 00:04:03.957 LINK nvmf_tgt 00:04:03.957 LINK env_dpdk_post_init 00:04:03.957 CXX test/cpp_headers/endian.o 00:04:03.957 CXX test/cpp_headers/env_dpdk.o 00:04:03.957 CXX test/cpp_headers/env.o 00:04:03.957 CXX test/cpp_headers/event.o 00:04:03.957 CXX test/cpp_headers/fd_group.o 00:04:03.957 CXX test/cpp_headers/fd.o 00:04:03.957 LINK spdk_trace_record 00:04:03.957 CXX test/cpp_headers/file.o 00:04:03.957 CXX test/cpp_headers/ftl.o 00:04:03.957 CXX test/cpp_headers/gpt_spec.o 00:04:03.957 LINK stub 00:04:03.957 CXX test/cpp_headers/hexlify.o 00:04:03.957 LINK iscsi_tgt 00:04:03.957 CXX test/cpp_headers/histogram_data.o 00:04:03.957 LINK verify 00:04:03.957 LINK ioat_perf 00:04:03.957 LINK bdev_svc 00:04:03.957 CXX test/cpp_headers/idxd.o 00:04:03.957 LINK spdk_tgt 00:04:03.957 CXX test/cpp_headers/idxd_spec.o 00:04:03.957 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.957 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.957 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.957 LINK mem_callbacks 00:04:03.957 CXX test/cpp_headers/init.o 00:04:03.957 CXX test/cpp_headers/ioat.o 00:04:04.220 CXX test/cpp_headers/ioat_spec.o 00:04:04.220 CXX test/cpp_headers/iscsi_spec.o 00:04:04.220 CXX test/cpp_headers/json.o 00:04:04.220 CXX test/cpp_headers/jsonrpc.o 00:04:04.220 CXX test/cpp_headers/keyring.o 00:04:04.220 LINK spdk_dd 00:04:04.220 CXX test/cpp_headers/keyring_module.o 00:04:04.220 CXX test/cpp_headers/likely.o 00:04:04.220 CXX test/cpp_headers/log.o 00:04:04.220 CXX test/cpp_headers/lvol.o 00:04:04.220 CXX test/cpp_headers/memory.o 00:04:04.220 CXX test/cpp_headers/mmio.o 00:04:04.220 CXX test/cpp_headers/nbd.o 00:04:04.220 CXX test/cpp_headers/notify.o 00:04:04.220 LINK pci_ut 00:04:04.220 CXX test/cpp_headers/nvme.o 00:04:04.220 CXX test/cpp_headers/nvme_intel.o 00:04:04.220 CXX test/cpp_headers/nvme_ocssd.o 00:04:04.220 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:04.220 CXX test/cpp_headers/nvme_spec.o 00:04:04.220 LINK spdk_trace 00:04:04.220 LINK test_dma 00:04:04.220 CXX test/cpp_headers/nvme_zns.o 00:04:04.220 CXX test/cpp_headers/nvmf_cmd.o 00:04:04.220 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:04.220 CXX test/cpp_headers/nvmf.o 00:04:04.220 CXX test/cpp_headers/nvmf_spec.o 00:04:04.489 CXX test/cpp_headers/nvmf_transport.o 00:04:04.489 CXX test/cpp_headers/opal.o 00:04:04.489 CC test/event/event_perf/event_perf.o 00:04:04.489 CC test/event/reactor_perf/reactor_perf.o 00:04:04.489 CC test/event/reactor/reactor.o 
00:04:04.489 CXX test/cpp_headers/opal_spec.o 00:04:04.489 CXX test/cpp_headers/pci_ids.o 00:04:04.489 CXX test/cpp_headers/pipe.o 00:04:04.489 CC test/event/app_repeat/app_repeat.o 00:04:04.489 LINK nvme_fuzz 00:04:04.489 CXX test/cpp_headers/queue.o 00:04:04.489 CC examples/thread/thread/thread_ex.o 00:04:04.489 CXX test/cpp_headers/reduce.o 00:04:04.489 CXX test/cpp_headers/rpc.o 00:04:04.489 CXX test/cpp_headers/scheduler.o 00:04:04.489 CC examples/vmd/lsvmd/lsvmd.o 00:04:04.489 CXX test/cpp_headers/scsi.o 00:04:04.489 CC examples/vmd/led/led.o 00:04:04.489 CC examples/idxd/perf/perf.o 00:04:04.489 CC test/event/scheduler/scheduler.o 00:04:04.489 CXX test/cpp_headers/scsi_spec.o 00:04:04.489 CXX test/cpp_headers/sock.o 00:04:04.489 CC examples/sock/hello_world/hello_sock.o 00:04:04.749 CXX test/cpp_headers/stdinc.o 00:04:04.749 CXX test/cpp_headers/string.o 00:04:04.749 CXX test/cpp_headers/thread.o 00:04:04.749 CXX test/cpp_headers/trace.o 00:04:04.749 CXX test/cpp_headers/trace_parser.o 00:04:04.749 CXX test/cpp_headers/tree.o 00:04:04.749 CXX test/cpp_headers/ublk.o 00:04:04.749 CXX test/cpp_headers/util.o 00:04:04.749 CXX test/cpp_headers/uuid.o 00:04:04.749 CXX test/cpp_headers/version.o 00:04:04.749 LINK event_perf 00:04:04.749 LINK reactor_perf 00:04:04.749 LINK reactor 00:04:04.749 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.749 LINK spdk_bdev 00:04:04.749 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.749 CXX test/cpp_headers/vhost.o 00:04:04.749 CXX test/cpp_headers/vmd.o 00:04:04.749 CXX test/cpp_headers/xor.o 00:04:04.749 CXX test/cpp_headers/zipf.o 00:04:04.749 LINK vhost_fuzz 00:04:04.749 LINK app_repeat 00:04:04.749 LINK lsvmd 00:04:05.010 LINK spdk_nvme_perf 00:04:05.010 LINK spdk_nvme 00:04:05.010 LINK led 00:04:05.010 LINK memory_ut 00:04:05.010 CC app/vhost/vhost.o 00:04:05.010 LINK spdk_nvme_identify 00:04:05.010 LINK spdk_top 00:04:05.010 LINK thread 00:04:05.010 LINK scheduler 00:04:05.010 LINK hello_sock 00:04:05.010 CC test/nvme/e2edp/nvme_dp.o 00:04:05.010 CC test/nvme/err_injection/err_injection.o 00:04:05.010 CC test/nvme/aer/aer.o 00:04:05.010 CC test/nvme/sgl/sgl.o 00:04:05.010 CC test/nvme/reset/reset.o 00:04:05.010 CC test/accel/dif/dif.o 00:04:05.010 CC test/nvme/startup/startup.o 00:04:05.010 CC test/nvme/overhead/overhead.o 00:04:05.010 CC test/nvme/simple_copy/simple_copy.o 00:04:05.010 CC test/nvme/connect_stress/connect_stress.o 00:04:05.010 CC test/nvme/reserve/reserve.o 00:04:05.010 CC test/nvme/boot_partition/boot_partition.o 00:04:05.010 CC test/blobfs/mkfs/mkfs.o 00:04:05.010 CC test/nvme/compliance/nvme_compliance.o 00:04:05.010 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.010 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.269 CC test/nvme/fdp/fdp.o 00:04:05.269 CC test/nvme/cuse/cuse.o 00:04:05.269 CC test/lvol/esnap/esnap.o 00:04:05.269 LINK idxd_perf 00:04:05.269 LINK vhost 00:04:05.269 LINK startup 00:04:05.269 LINK boot_partition 00:04:05.269 LINK fused_ordering 00:04:05.269 LINK err_injection 00:04:05.269 LINK mkfs 00:04:05.269 LINK simple_copy 00:04:05.269 LINK connect_stress 00:04:05.529 LINK doorbell_aers 00:04:05.529 LINK nvme_dp 00:04:05.529 LINK reset 00:04:05.529 LINK reserve 00:04:05.529 LINK sgl 00:04:05.529 LINK aer 00:04:05.529 LINK overhead 00:04:05.529 CC examples/nvme/reconnect/reconnect.o 00:04:05.529 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:05.529 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.529 CC examples/nvme/abort/abort.o 00:04:05.529 CC examples/nvme/hotplug/hotplug.o 00:04:05.529 CC 
examples/nvme/arbitration/arbitration.o 00:04:05.529 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:05.529 CC examples/nvme/hello_world/hello_world.o 00:04:05.529 LINK nvme_compliance 00:04:05.529 CC examples/accel/perf/accel_perf.o 00:04:05.529 LINK fdp 00:04:05.529 CC examples/blob/hello_world/hello_blob.o 00:04:05.529 CC examples/blob/cli/blobcli.o 00:04:05.788 LINK dif 00:04:05.788 LINK cmb_copy 00:04:05.788 LINK pmr_persistence 00:04:05.788 LINK hello_world 00:04:05.788 LINK hotplug 00:04:05.788 LINK arbitration 00:04:05.788 LINK reconnect 00:04:06.047 LINK hello_blob 00:04:06.047 LINK abort 00:04:06.047 LINK nvme_manage 00:04:06.047 LINK accel_perf 00:04:06.047 CC test/bdev/bdevio/bdevio.o 00:04:06.047 LINK blobcli 00:04:06.615 CC examples/bdev/hello_world/hello_bdev.o 00:04:06.615 CC examples/bdev/bdevperf/bdevperf.o 00:04:06.615 LINK iscsi_fuzz 00:04:06.615 LINK bdevio 00:04:06.615 LINK hello_bdev 00:04:06.615 LINK cuse 00:04:07.183 LINK bdevperf 00:04:07.752 CC examples/nvmf/nvmf/nvmf.o 00:04:07.752 LINK nvmf 00:04:10.288 LINK esnap 00:04:10.549 00:04:10.549 real 0m40.722s 00:04:10.549 user 7m21.705s 00:04:10.549 sys 1m50.322s 00:04:10.549 10:50:24 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:10.549 10:50:24 make -- common/autotest_common.sh@10 -- $ set +x 00:04:10.549 ************************************ 00:04:10.549 END TEST make 00:04:10.549 ************************************ 00:04:10.549 10:50:24 -- common/autotest_common.sh@1142 -- $ return 0 00:04:10.549 10:50:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:10.549 10:50:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:10.549 10:50:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:10.549 10:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:10.549 10:50:24 -- pm/common@44 -- $ pid=9949 00:04:10.549 10:50:24 -- pm/common@50 -- $ kill -TERM 9949 00:04:10.549 10:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:10.549 10:50:24 -- pm/common@44 -- $ pid=9951 00:04:10.549 10:50:24 -- pm/common@50 -- $ kill -TERM 9951 00:04:10.549 10:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:10.549 10:50:24 -- pm/common@44 -- $ pid=9953 00:04:10.549 10:50:24 -- pm/common@50 -- $ kill -TERM 9953 00:04:10.549 10:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:10.549 10:50:24 -- pm/common@44 -- $ pid=9982 00:04:10.549 10:50:24 -- pm/common@50 -- $ sudo -E kill -TERM 9982 00:04:10.549 10:50:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.549 10:50:24 -- nvmf/common.sh@7 -- # uname -s 00:04:10.549 10:50:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.549 10:50:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.549 10:50:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.549 10:50:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.549 10:50:24 
-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.549 10:50:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.549 10:50:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.549 10:50:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.549 10:50:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.549 10:50:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.549 10:50:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.549 10:50:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:10.549 10:50:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.549 10:50:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.549 10:50:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:10.549 10:50:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.549 10:50:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:10.549 10:50:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.549 10:50:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.549 10:50:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.549 10:50:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.549 10:50:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.549 10:50:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.549 10:50:24 -- paths/export.sh@5 -- # export PATH 00:04:10.549 10:50:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.549 10:50:24 -- nvmf/common.sh@47 -- # : 0 00:04:10.549 10:50:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:10.549 10:50:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:10.549 10:50:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.549 10:50:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.549 10:50:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.549 10:50:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:10.549 10:50:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:10.549 10:50:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:10.549 10:50:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:10.549 10:50:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:10.549 10:50:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:10.549 10:50:24 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:10.549 10:50:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:10.549 10:50:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:10.549 10:50:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:10.549 10:50:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:10.549 10:50:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:10.549 10:50:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:10.549 10:50:24 -- spdk/autotest.sh@48 -- # udevadm_pid=85682 00:04:10.549 10:50:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:10.549 10:50:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:10.549 10:50:24 -- pm/common@17 -- # local monitor 00:04:10.549 10:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@21 -- # date +%s 00:04:10.549 10:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.549 10:50:24 -- pm/common@21 -- # date +%s 00:04:10.549 10:50:24 -- pm/common@25 -- # sleep 1 00:04:10.549 10:50:24 -- pm/common@21 -- # date +%s 00:04:10.549 10:50:24 -- pm/common@21 -- # date +%s 00:04:10.549 10:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720687824 00:04:10.549 10:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720687824 00:04:10.549 10:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720687824 00:04:10.549 10:50:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720687824 00:04:10.809 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720687824_collect-vmstat.pm.log 00:04:10.809 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720687824_collect-cpu-load.pm.log 00:04:10.809 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720687824_collect-cpu-temp.pm.log 00:04:10.809 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720687824_collect-bmc-pm.bmc.pm.log 00:04:11.750 10:50:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:11.750 10:50:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:11.750 10:50:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.750 10:50:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.750 10:50:25 -- spdk/autotest.sh@59 -- # create_test_list 00:04:11.750 10:50:25 -- common/autotest_common.sh@746 -- # xtrace_disable 
00:04:11.750 10:50:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.750 10:50:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:11.750 10:50:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.750 10:50:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.750 10:50:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:11.750 10:50:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.750 10:50:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:11.750 10:50:26 -- common/autotest_common.sh@1455 -- # uname 00:04:11.750 10:50:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:11.750 10:50:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:11.750 10:50:26 -- common/autotest_common.sh@1475 -- # uname 00:04:11.750 10:50:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:11.750 10:50:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:11.750 10:50:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:11.750 10:50:26 -- spdk/autotest.sh@72 -- # hash lcov 00:04:11.750 10:50:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:11.750 10:50:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:11.750 --rc lcov_branch_coverage=1 00:04:11.750 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 ' 00:04:11.750 10:50:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:11.750 --rc lcov_branch_coverage=1 00:04:11.750 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 ' 00:04:11.750 10:50:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:11.750 --rc lcov_branch_coverage=1 00:04:11.750 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 --no-external' 00:04:11.750 10:50:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:11.750 --rc lcov_branch_coverage=1 00:04:11.750 --rc lcov_function_coverage=1 00:04:11.750 --rc genhtml_branch_coverage=1 00:04:11.750 --rc genhtml_function_coverage=1 00:04:11.750 --rc genhtml_legend=1 00:04:11.750 --rc geninfo_all_blocks=1 00:04:11.750 --no-external' 00:04:11.750 10:50:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:11.750 lcov: LCOV version 1.14 00:04:11.750 10:50:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:13.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:13.659 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:13.660 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:13.660 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:13.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:13.660 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:13.660 [... elided: the same two-line warning pair ("<name>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <name>.gcno") repeats at 00:04:13.660-00:04:13.921 for every remaining header under spdk/test/cpp_headers: nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf ...]
00:04:28.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:28.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:46.947 10:51:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:46.947 10:51:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.947 10:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.947 10:51:00 -- spdk/autotest.sh@91 -- # rm -f 00:04:46.947 10:51:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:46.947 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:46.947 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:46.947 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:46.947 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:46.947 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:46.947 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:46.947 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:46.947 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:46.947 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:46.947 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:46.947 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:46.947 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:46.947 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:46.947 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:46.947 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:46.947 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:46.947 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:47.208 10:51:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:47.208 10:51:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:47.208 10:51:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:47.208 10:51:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:47.208 10:51:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.208 10:51:01 --
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:47.208 10:51:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:47.208 10:51:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.208 10:51:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.208 10:51:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:47.208 10:51:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.208 10:51:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:47.208 10:51:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:47.208 10:51:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:47.208 10:51:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.208 No valid GPT data, bailing 00:04:47.208 10:51:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.208 10:51:01 -- scripts/common.sh@391 -- # pt= 00:04:47.208 10:51:01 -- scripts/common.sh@392 -- # return 1 00:04:47.208 10:51:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.208 1+0 records in 00:04:47.209 1+0 records out 00:04:47.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00241398 s, 434 MB/s 00:04:47.209 10:51:01 -- spdk/autotest.sh@118 -- # sync 00:04:47.209 10:51:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:47.209 10:51:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:47.209 10:51:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.114 10:51:03 -- spdk/autotest.sh@124 -- # uname -s 00:04:49.114 10:51:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:49.114 10:51:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.114 10:51:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.114 10:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.114 10:51:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.114 ************************************ 00:04:49.114 START TEST setup.sh 00:04:49.114 ************************************ 00:04:49.114 10:51:03 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.114 * Looking for test storage... 00:04:49.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.114 10:51:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:49.114 10:51:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:49.114 10:51:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:49.114 10:51:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.114 10:51:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.114 10:51:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.114 ************************************ 00:04:49.114 START TEST acl 00:04:49.114 ************************************ 00:04:49.114 10:51:03 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:49.373 * Looking for test storage... 
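[Editor's note: the pre_cleanup trace above walks get_zoned_devs and the partition probe by hand. Below is a minimal bash sketch of that logic, not part of the log; the sysfs paths, commands, and values come from the trace, while the surrounding structure and the zoned_devs bookkeeping are assumptions rather than SPDK's exact implementation.]

    # Sketch: skip zoned namespaces, then wipe an unpartitioned, unused one.
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # A conventional namespace reports "none" in queue/zoned; the trace
        # above evaluated [[ none != none ]], i.e. nvme0n1 is not zoned.
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1   # hypothetical bookkeeping, excluded from the wipe
        fi
    done
    # block_in_use probes for a partition table (spdk-gpt.py, then blkid);
    # an empty PTTYPE is the "No valid GPT data, bailing" case seen above,
    # after which the first MiB of the namespace is zero-filled.
    pt=$(blkid -s PTTYPE -o value /dev/nvme0n1) || pt=
    if [[ -z $pt ]]; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    fi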
00:04:49.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.373 10:51:03 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.374 10:51:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.374 10:51:03 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:49.374 10:51:03 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:49.374 10:51:03 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:49.374 10:51:03 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:49.374 10:51:03 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:49.374 10:51:03 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.374 10:51:03 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:50.757 10:51:05 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:50.757 10:51:05 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:50.757 10:51:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.757 10:51:05 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:50.757 10:51:05 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.757 10:51:05 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:52.136 Hugepages 00:04:52.136 node hugesize free / total 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.136 00:04:52.136 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:52.136 10:51:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.137 [... elided: the identical [[ $dev == *:*:*.* ]] / [[ ioatdma == nvme ]] / continue / read sequence repeats for 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7, all bound to ioatdma and all skipped ...] 10:51:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:52.137 10:51:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:52.137 10:51:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.137 10:51:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.137 10:51:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:52.137 ************************************ 00:04:52.137 START TEST denied 00:04:52.137 ************************************ 00:04:52.137 10:51:06 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:52.137 10:51:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:52.137 10:51:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:52.137 10:51:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:52.137 10:51:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.137 10:51:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:54.046 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:54.046 10:51:08 setup.sh.acl.denied --
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.046 10:51:08 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.583 00:04:56.583 real 0m4.053s 00:04:56.583 user 0m1.131s 00:04:56.583 sys 0m1.926s 00:04:56.583 10:51:10 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.583 10:51:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:56.583 ************************************ 00:04:56.583 END TEST denied 00:04:56.583 ************************************ 00:04:56.583 10:51:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:56.583 10:51:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:56.583 10:51:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.583 10:51:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.583 10:51:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:56.583 ************************************ 00:04:56.583 START TEST allowed 00:04:56.583 ************************************ 00:04:56.583 10:51:10 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:56.583 10:51:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:56.583 10:51:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:56.583 10:51:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.583 10:51:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:56.583 10:51:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.120 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.120 10:51:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:59.120 10:51:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:59.120 10:51:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:59.120 10:51:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.120 10:51:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.499 00:05:00.499 real 0m4.065s 00:05:00.499 user 0m1.020s 00:05:00.499 sys 0m1.861s 00:05:00.499 10:51:14 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.499 10:51:14 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:00.499 ************************************ 00:05:00.499 END TEST allowed 00:05:00.499 ************************************ 00:05:00.499 10:51:14 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:00.499 00:05:00.499 real 0m11.111s 00:05:00.499 user 0m3.311s 00:05:00.499 sys 0m5.690s 00:05:00.499 10:51:14 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.499 10:51:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:00.499 ************************************ 00:05:00.499 END TEST acl 00:05:00.499 ************************************ 00:05:00.499 10:51:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:00.499 10:51:14 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:00.499 10:51:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.499 10:51:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.499 10:51:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:00.499 ************************************ 00:05:00.499 START TEST hugepages 00:05:00.499 ************************************ 00:05:00.499 10:51:14 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:00.499 * Looking for test storage... 00:05:00.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 42712440 kB' 'MemAvailable: 46112916 kB' 'Buffers: 9388 kB' 'Cached: 11033620 kB' 'SwapCached: 0 kB' 'Active: 8425052 kB' 'Inactive: 3432808 kB' 'Active(anon): 8050972 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 817700 kB' 'Mapped: 145864 kB' 'Shmem: 7236120 kB' 'KReclaimable: 174420 kB' 'Slab: 463712 kB' 'SReclaimable: 174420 kB' 'SUnreclaim: 289292 kB' 'KernelStack: 12672 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 9651596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192948 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 10:51:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 [... elided: the same [[ $var == Hugepagesize ]] / continue / IFS=': ' / read -r var val _ sequence repeats for every following /proc/meminfo field, MemFree through HugePages_Surp, until the Hugepagesize line is reached ...] 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.500 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.501
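[Editor's note: the field-by-field scan condensed above is get_meminfo at work. A minimal bash sketch follows, not part of the log; the traced helper actually mapfiles the file into an array first, but the IFS=': ' read -r var val _ scan and the echo on match are as traced, and everything else here is an assumption.]

    # get_meminfo <key> [node]: print the value recorded for <key>.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument the per-node meminfo is read instead; the trace
        # probed /sys/devices/system/node/node/meminfo with an empty node and
        # fell back to /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < "$mem_f"
        return 1
    }
    # As traced: get_meminfo Hugepagesize -> 2048. clear_hp (running above and
    # continuing below) then writes 0 to every node's hugepages-*/nr_hugepages
    # so the hugepage tests start from a clean slate.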
10:51:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.501 10:51:14 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:00.501 10:51:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.501 10:51:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.501 10:51:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.501 ************************************ 00:05:00.501 START TEST default_setup 00:05:00.501 ************************************ 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.501 10:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.885 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.885 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.885 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:01.885 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.832 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44832716 kB' 'MemAvailable: 48232988 kB' 'Buffers: 9388 kB' 'Cached: 11033704 kB' 'SwapCached: 0 kB' 'Active: 8443204 kB' 'Inactive: 3432808 kB' 'Active(anon): 8069124 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 836136 kB' 'Mapped: 144676 kB' 'Shmem: 7236204 kB' 'KReclaimable: 174012 kB' 'Slab: 462300 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288288 kB' 
'KernelStack: 12560 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9634108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193076 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.832 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.832 [... elided: the same [[ $var == AnonHugePages ]] / continue / IFS=': ' / read -r var val _ sequence repeats for each following /proc/meminfo field, MemFree through WritebackTmp, where the captured log breaks off mid-scan ...]
00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.833 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.834 10:51:17 
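What the xtrace above records is setup/common.sh's get_meminfo helper answering the query at hugepages.sh@97: it snapshots /proc/meminfo into an array, walks it one "key: value" line at a time, hits continue for every field that is not the requested one, and on the first match (AnonHugePages) echoes the value and returns, which is the echo 0 / return 0 pair at common.sh@33. A rough reconstruction of the helper, inferred from the trace alone; the loop form, the node handling, and the return-1 fallback are assumptions, not verbatim SPDK source:

  shopt -s extglob                      # needed for the +([0-9]) pattern below
  get_meminfo() {                       # sketch reconstructed from the xtrace
      local get=$1                      # field to look up, e.g. AnonHugePages
      local node=$2                     # optional NUMA node; empty in this run
      local var val _
      local mem_f mem line
      mem_f=/proc/meminfo               # common.sh@22
      # common.sh@23: prefer the per-node meminfo file when a node is given
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"         # common.sh@28
      mem=("${mem[@]#Node +([0-9]) }")  # common.sh@29: strip per-node "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # common.sh@31
          [[ $var == "$get" ]] || continue         # common.sh@32, the repeated test above
          echo "$val"                              # common.sh@33
          return 0
      done
      return 1
  }

Against the snapshots printed in this log, get_meminfo AnonHugePages yields 0 and get_meminfo HugePages_Total yields 1024.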
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44832660 kB' 'MemAvailable: 48232932 kB' 'Buffers: 9388 kB' 'Cached: 11033704 kB' 'SwapCached: 0 kB' 'Active: 8443652 kB' 'Inactive: 3432808 kB' 'Active(anon): 8069572 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 836672 kB' 'Mapped: 144600 kB' 'Shmem: 7236204 kB' 'KReclaimable: 174012 kB' 'Slab: 462316 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288304 kB' 'KernelStack: 12640 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9634124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193028 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- 
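The single printf just above is the whole /proc/meminfo snapshot the helper now walks for HugePages_Surp; only a handful of its fields matter to this test. Outside the harness, the same fields can be checked directly, a one-liner sketch rather than anything the suite itself runs:

  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
  # per the snapshot above: AnonHugePages 0 kB, HugePages_Total 1024,
  # HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0,
  # Hugepagesize 2048 kB, Hugetlb 2097152 kB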
setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.834 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.835 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- 
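Two of the four counters are in hand at this point (anon=0 at hugepages.sh@97, surp=0 at @99), and the trace is entering get_meminfo a third time for HugePages_Rsvd (@100). The driving sequence in setup/hugepages.sh as implied by the @-markers; the variable names and call order come from the trace, the command-substitution form is an assumption:

  anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> anon=0
  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> surp=0
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> resv=0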
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44831884 kB' 'MemAvailable: 48232156 kB' 'Buffers: 9388 kB' 'Cached: 11033724 kB' 'SwapCached: 0 kB' 'Active: 8443020 kB' 'Inactive: 3432808 kB' 'Active(anon): 8068940 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 835980 kB' 'Mapped: 144600 kB' 'Shmem: 7236224 kB' 'KReclaimable: 174012 kB' 'Slab: 462300 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288288 kB' 'KernelStack: 12608 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9634148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193028 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:02.836 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 
10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.837 nr_hugepages=1024 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.837 resv_hugepages=0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.837 surplus_hugepages=0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.837 anon_hugepages=0 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44831244 
00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:02.837 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:02.838 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44831244 kB' 'MemAvailable: 48231516 kB' 'Buffers: 9388 kB' 'Cached: 11033744 kB' 'SwapCached: 0 kB' 'Active: 8443352 kB' 'Inactive: 3432808 kB' 'Active(anon): 8069272 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 836300 kB' 'Mapped: 144600 kB' 'Shmem: 7236244 kB' 'KReclaimable: 174012 kB' 'Slab: 462320 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288308 kB' 'KernelStack: 12608 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9634168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193028 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[... repetitive xtrace elided: setup/common.sh@32 walks the snapshot above, comparing each field (MemTotal through Unaccepted) against HugePages_Total and continuing past every non-match ...]
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:02.839 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26429072 kB' 'MemUsed: 6400812 kB' 'SwapCached: 0 kB' 'Active: 3252700 kB' 'Inactive: 85740 kB' 'Active(anon): 3090900 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688648 kB' 'Mapped: 65612 kB' 'AnonPages: 652980 kB' 'Shmem: 2441108 kB' 'KernelStack: 7464 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120716 kB' 'Slab: 314384 kB' 'SReclaimable: 120716 kB' 'SUnreclaim: 193668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
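Following on, a minimal sketch of the get_nodes bookkeeping just traced (the array name nodes_sys_sketch is illustrative, and it reuses the get_meminfo_sketch helper sketched earlier): enumerate /sys/devices/system/node/node<N> with an extglob pattern and record each node's current hugepage count, which this run reports as 1024 on node0 and 0 on node1.

shopt -s extglob nullglob   # extglob for +([0-9]); nullglob so zero nodes -> zero iterations
declare -a nodes_sys_sketch
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}   # ".../node0" -> "0"
    nodes_sys_sketch[n]=$(get_meminfo_sketch HugePages_Total "$n")
done
echo "no_nodes=${#nodes_sys_sketch[@]}"   # 2 on this test machine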
[... repetitive xtrace elided: setup/common.sh@32 walks the node0 snapshot above, comparing each field (MemTotal through Unaccepted, then HugePages_Total and HugePages_Free) against HugePages_Surp and continuing past every non-match ...]
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:03.100 node0=1024 expecting 1024
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:03.100 
00:05:03.100 real	0m2.443s
00:05:03.100 user	0m0.658s
00:05:03.100 sys	0m0.903s
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:03.100 10:51:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:03.100 ************************************
00:05:03.100 END TEST default_setup
00:05:03.100 ************************************
00:05:03.100 10:51:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:03.100 10:51:17 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:03.100 10:51:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:03.100 10:51:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:03.100 10:51:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:03.100 ************************************
00:05:03.100 START TEST per_node_1G_alloc
00:05:03.100 ************************************
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
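The get_test_nr_hugepages call being traced converts a requested size into a per-node page count. A minimal sketch of that arithmetic, assuming the kB units shown in the snapshots above (variable names are illustrative): a 1048576 kB (1 GiB) request at the default 2048 kB hugepage size comes out to the 512 pages the trace assigns to each of nodes 0 and 1, for the 1024 total verified afterwards.

size_kb=1048576                                      # requested size, in kB
hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 on this machine
nr_hugepages=$(( size_kb / hugepagesize_kb ))        # 512 pages
node_ids=(0 1)
declare -a nodes_test_sketch
for n in "${node_ids[@]}"; do
    nodes_test_sketch[n]=$nr_hugepages               # 512 per requested node
done
echo "nr_hugepages=$nr_hugepages per node on ${node_ids[*]}"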
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:03.100 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.101 10:51:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:04.038 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:04.038 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:04.038 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:04.038 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:04.038 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:04.038 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:04.038 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:04.038 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:04.038 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:04.038 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:04.038 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:04.038 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:04.301 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:04.301 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:04.301 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:04.301 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:04.301 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.301 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.302 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44807944 kB' 'MemAvailable: 48208216 kB' 'Buffers: 9388 kB' 'Cached: 11033832 kB' 'SwapCached: 0 kB' 'Active: 8451872 kB' 'Inactive: 3432808 kB' 'Active(anon): 8077792 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 844768 kB' 'Mapped: 145076 kB' 'Shmem: 7236332 kB' 'KReclaimable: 174012 kB' 'Slab: 462676 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288664 kB' 'KernelStack: 12640 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9640484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193016 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
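Before sampling AnonHugePages, the trace checks /sys/kernel/mm/transparent_hugepage/enabled. A minimal sketch of that gate (reusing the get_meminfo_sketch helper; otherwise illustrative): anon hugepages are only counted when THP is not pinned to [never], which is why the "always [madvise] never" value seen above lets the scan proceed.

thp_f=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp_f && $(<"$thp_f") != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # in kB; 0 in this run
fi
echo "anon_hugepages=$anon"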
[... repetitive xtrace elided: setup/common.sh@32 walks the snapshot above, comparing each field (MemTotal through WritebackTmp, where this capture continues) against AnonHugePages and continuing past every non-match ...]
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
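For readability, the repeated trace above boils down to the following minimal standalone sketch of a get_meminfo-style lookup. The function name and structure here are illustrative only, not the actual setup/common.sh source:

get_meminfo_value() {
    local get=$1 var val _
    # A /proc/meminfo line such as "AnonHugePages:       0 kB" splits
    # under IFS=': ' into var=AnonHugePages, val=0 (the unit lands in _).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        continue    # non-matching field; this is the repeated trace line
    done < /proc/meminfo
    return 1        # field not present
}

anon=$(get_meminfo_value AnonHugePages)    # yields 0 on this machine, as traced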
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44810908 kB' 'MemAvailable: 48211180 kB' 'Buffers: 9388 kB' 'Cached: 11033832 kB' 'SwapCached: 0 kB' 'Active: 8451248 kB' 'Inactive: 3432808 kB' 'Active(anon): 8077168 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 844064 kB' 'Mapped: 145448 kB' 'Shmem: 7236332 kB' 'KReclaimable: 174012 kB' 'Slab: 462752 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288740 kB' 'KernelStack: 12592 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9640500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192984 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.303 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[trimmed: the per-field scan cycle repeats verbatim until HugePages_Surp matches]
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
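The prologue before each snapshot (mem_f=/proc/meminfo, the node path test, and the Node-prefix strip) selects between the system-wide and a per-node meminfo file. A simplified sketch of that selection, assuming extglob is enabled for the +([0-9]) pattern (again illustrative, not the actual helper):

shopt -s extglob
node=${1:-}                       # empty => system-wide stats
mem_f=/proc/meminfo
# With a node number set, /sys/devices/system/node/node<N>/meminfo exists
# and is preferred; with node empty the test path degenerates to
# .../node/meminfo and fails, so the global file is used -- exactly the
# "[[ -e /sys/devices/system/node/node/meminfo ]]" line in the trace.
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node <N> "; strip that so both
# sources parse identically downstream:
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"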
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[trimmed: same local node= / local var val / mem_f=/proc/meminfo / mapfile prologue as above]
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44810664 kB' 'MemAvailable: 48210936 kB' 'Buffers: 9388 kB' 'Cached: 11033852 kB' 'SwapCached: 0 kB' 'Active: 8448716 kB' 'Inactive: 3432808 kB' 'Active(anon): 8074636 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841528 kB' 'Mapped: 145052 kB' 'Shmem: 7236352 kB' 'KReclaimable: 174012 kB' 'Slab: 462752 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288740 kB' 'KernelStack: 12560 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9638004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192932 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.305 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[trimmed: the per-field scan cycle repeats verbatim until HugePages_Rsvd matches]
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
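With anon, surp and resv in hand, hugepages.sh next echoes the values and sanity-checks the pool (the (( 1024 == nr_hugepages + surp + resv )) line below). A hedged sketch of that accounting check, reusing the illustrative get_meminfo_value from above and assuming the already-expanded 1024 on the left-hand side is the kernel-reported HugePages_Total:

nr_hugepages=1024                               # pages requested by the test
surp=$(get_meminfo_value HugePages_Surp)        # 0 here
resv=$(get_meminfo_value HugePages_Rsvd)        # 0 here
total=$(get_meminfo_value HugePages_Total)      # 1024 here
# The pool is consistent when the reported total equals the requested
# count plus surplus and reserved pages:
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
fi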
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:04.307 nr_hugepages=1024
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:04.307 resv_hugepages=0
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:04.307 surplus_hugepages=0
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:04.307 anon_hugepages=0
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[trimmed: same local node= / local var val / mem_f=/proc/meminfo / mapfile prologue as above]
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44807640 kB' 'MemAvailable: 48207912 kB' 'Buffers: 9388 kB' 'Cached: 11033876 kB' 'SwapCached: 0 kB' 'Active: 8451480 kB' 'Inactive: 3432808 kB' 'Active(anon): 8077400 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 844288 kB' 'Mapped: 145388 kB' 'Shmem: 7236376 kB' 'KReclaimable: 174012 kB' 'Slab: 462752 kB' 'SReclaimable: 174012 kB' 'SUnreclaim: 288740 kB' 'KernelStack: 12576 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9640548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192952 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.307 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[trimmed: the per-field scan cycle repeats verbatim through the meminfo fields]
00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.571 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:04.572 10:51:18 
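The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until HugePages_Total matches and its value (1024) is echoed back to the caller. A condensed, standalone sketch of that pattern, assuming the same file layout; the function body here is illustrative, not the SPDK helper verbatim:

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo pattern traced above: return one
# value from /proc/meminfo, or from a per-node meminfo file when a node
# number is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs; their lines carry a "Node <n> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total      # global count, e.g. 1024
get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0

The same helper serves both the global query above and the per-node queries that follow, which is why the trace re-runs it with node=0 and node=1.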
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27454756 kB' 'MemUsed: 5375128 kB' 'SwapCached: 0 kB' 'Active: 3255632 kB' 'Inactive: 85740 kB' 'Active(anon): 3093832 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688720 kB' 'Mapped: 65628 kB' 'AnonPages: 655908 kB' 'Shmem: 2441180 kB' 'KernelStack: 7496 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120716 kB' 'Slab: 314552 kB' 'SReclaimable: 120716 kB' 'SUnreclaim: 193836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan of the node0 meminfo above trimmed; only HugePages_Surp matches ...]
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.572 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 17353292 kB' 'MemUsed: 10358548 kB' 'SwapCached: 0 kB' 'Active: 5190756 kB' 'Inactive: 3347068 kB' 'Active(anon): 4978476 kB' 'Inactive(anon): 0 kB' 'Active(file): 212280 kB' 'Inactive(file): 3347068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354564 kB' 'Mapped: 78988 kB' 'AnonPages: 183368 kB' 'Shmem: 4795216 kB' 'KernelStack: 5112 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53296 kB' 'Slab: 148200 kB' 'SReclaimable: 53296 kB' 'SUnreclaim: 94904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan of the node1 meminfo above trimmed; only HugePages_Surp matches ...]
00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.573 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.574 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.575 10:51:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:04.575 node0=512 expecting 512 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:04.575 node1=512 expecting 512 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:04.575 00:05:04.575 real 0m1.483s 00:05:04.575 user 0m0.635s 00:05:04.575 sys 0m0.811s 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.575 10:51:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.575 ************************************ 00:05:04.575 END TEST per_node_1G_alloc 00:05:04.575 ************************************ 00:05:04.575 10:51:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:04.575 10:51:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:04.575 10:51:18 setup.sh.hugepages -- 
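per_node_1G_alloc passes because each node holds the expected 512 pages and the global pool is their sum. A minimal cross-check in the same spirit; the paths are standard Linux sysfs, and the script is a sketch, not part of setup/hugepages.sh:

#!/usr/bin/env bash
# Cross-check the global huge page count against the per-node counts,
# the relation the checks above assert (1024 == 512 + 512).
shopt -s extglob nullglob

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node+([0-9])/hugepages/hugepages-2048kB/nr_hugepages; do
    (( sum += $(cat "$f") ))
done
echo "global=$total, per-node sum=$sum"
(( total == sum )) || echo 'per-node split does not add up' >&2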
00:05:04.575 10:51:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.575 10:51:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.575 10:51:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:04.575 ************************************
00:05:04.575 START TEST even_2G_alloc
00:05:04.575 ************************************
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.575 10:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:05.962 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:05.962 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
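The sizing just traced is plain arithmetic: get_test_nr_hugepages turns the 2097152 kB (2 GiB) request into nr_hugepages=1024 using the 2048 kB default hugepage size, and HUGE_EVEN_ALLOC spreads it over _no_nodes=2 as 512 pages per node. A standalone sketch of that computation, not the actual setup/hugepages.sh helpers (the vfio-pci listing from setup.sh resumes below):

  #!/usr/bin/env bash
  # Sketch: derive the per-node hugepage budget the trace above arrives at.
  size_kb=2097152                                     # requested total, in kB (2 GiB)
  default_hugepage_kb=2048                            # Hugepagesize from /proc/meminfo
  no_nodes=2                                          # NUMA nodes on this test rig
  nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 1024
  per_node=$(( nr_hugepages / no_nodes ))             # -> 512
  for (( node = no_nodes - 1; node >= 0; node-- )); do
      echo "node${node}=${per_node}"                  # hugepages.sh@82 fills the last node first
  done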
00:05:05.962 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:05.962 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:05.962 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:05.962 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:05.962 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:05.962 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:05.962 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:05.962 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:05.962 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:05.962 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:05.962 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:05.962 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:05.962 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:05.962 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:05.962 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44825396 kB' 'MemAvailable: 48225924 kB' 'Buffers: 9388 kB' 'Cached: 11033960 kB' 'SwapCached: 0 kB' 'Active: 8446168 kB' 'Inactive: 3432808 kB' 'Active(anon): 8072088 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 838940 kB' 'Mapped: 143868 kB' 'Shmem: 7236460 kB' 'KReclaimable: 174524 kB' 'Slab: 463204 kB' 'SReclaimable: 174524 kB' 'SUnreclaim: 288680 kB' 'KernelStack: 12544 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9621388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192932 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.962 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[… identical setup/common.sh@31-32 skip-traces (IFS=': ', read -r var val _, continue) for every key from MemFree through HardwareCorrupted elided …]
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
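The get_meminfo AnonHugePages call that just returned works exactly as the xtrace shows: read /proc/meminfo with IFS=': ', skip every key that is not the one requested, and echo the matching value. A minimal standalone equivalent (hypothetical function name; the per-node branch via /sys/devices/system/node/nodeN/meminfo that common.sh@23-25 tests for is omitted):

  #!/usr/bin/env bash
  # Sketch: echo the value of a single /proc/meminfo field, mirroring setup/common.sh@17-33.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do    # same field splitting as the trace above
          [[ $var == "$get" ]] || continue    # non-matching keys fall through, as at common.sh@32
          echo "$val"                         # e.g. 0 for AnonHugePages here
          return 0
      done < /proc/meminfo
      return 1                                # field not present
  }
  get_meminfo_sketch AnonHugePages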
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44825808 kB' 'MemAvailable: 48226336 kB' 'Buffers: 9388 kB' 'Cached: 11033964 kB' 'SwapCached: 0 kB' 'Active: 8445204 kB' 'Inactive: 3432808 kB' 'Active(anon): 8071124 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 838344 kB' 'Mapped: 143752 kB' 'Shmem: 7236464 kB' 'KReclaimable: 174524 kB' 'Slab: 463236 kB' 'SReclaimable: 174524 kB' 'SUnreclaim: 288712 kB' 'KernelStack: 12560 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9621404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192900 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.963 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[… identical setup/common.sh@31-32 skip-traces (IFS=': ', read -r var val _, continue) for every key from MemFree through HugePages_Rsvd elided …]
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
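With anon=0 and surp=0 recorded, verify_nr_hugepages next reads HugePages_Rsvd into resv the same way. For reference, the counters this section collects one get_meminfo call at a time can be pulled in a few lines; meminfo() below is a hypothetical helper, and the test's final pass/fail arithmetic is not reproduced here:

  #!/usr/bin/env bash
  # Sketch: gather the hugepage counters verify_nr_hugepages reads one by one.
  meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }
  anon=$(meminfo AnonHugePages)       # 0 in the trace above
  surp=$(meminfo HugePages_Surp)      # 0
  resv=$(meminfo HugePages_Rsvd)      # read next in this log
  total=$(meminfo HugePages_Total)    # 1024 per the snapshots
  free=$(meminfo HugePages_Free)      # 1024
  echo "total=${total} free=${free} resv=${resv} surp=${surp} anon=${anon}"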
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44826524 kB' 'MemAvailable: 48227052 kB' 'Buffers: 9388 kB' 'Cached: 11033980 kB' 'SwapCached: 0 kB' 'Active: 8445216 kB' 'Inactive: 3432808 kB' 'Active(anon): 8071136 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 838312 kB' 'Mapped: 143752 kB' 'Shmem: 7236480 kB' 'KReclaimable: 174524 kB' 'Slab: 463236 kB' 'SReclaimable: 174524 kB' 'SUnreclaim: 288712 kB' 'KernelStack: 12544 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9621428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192900 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.965 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[… identical setup/common.sh@31-32 skip-traces (IFS=': ', read -r var val _, continue) for every key from MemFree through VmallocChunk elided …]
00:05:05.966 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.966 10:51:20
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.966 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.966 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:05.967 nr_hugepages=1024 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.967 resv_hugepages=0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.967 surplus_hugepages=0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.967 anon_hugepages=0 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
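For readability, here is the parsing pattern this xtrace is exercising, pulled out into a small standalone script. It is a sketch reconstructed from the trace alone: the name get_meminfo_sketch is ours, and the body paraphrases what setup/common.sh appears to do rather than reproducing its source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Look up one key in /proc/meminfo, or in a per-node meminfo file when
    # a NUMA node number is given, and print its value (the "kB" unit
    # column is dropped). Sketch only, paraphrased from the trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the long skip loop in the trace
            echo "${val:-0}"
            return 0
        done
        echo 0   # key not present in this kernel's meminfo
    }

    get_meminfo_sketch HugePages_Rsvd    # prints 0 on this box, as traced above
    get_meminfo_sketch HugePages_Total   # prints 1024

Splitting on IFS=': ' puts the key in var and the number in val, so the trailing kB unit falls into the throwaway _ field; that is why the trace skips every non-matching field with a bare continue and answers with a single echo once the key matches.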
00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.967 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44825768 kB' 'MemAvailable: 48226292 kB' 'Buffers: 9388 kB' 'Cached: 11034000 kB' 'SwapCached: 0 kB' 'Active: 8445264 kB' 'Inactive: 3432808 kB' 'Active(anon): 8071184 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 838380 kB' 'Mapped: 143752 kB' 'Shmem: 7236500 kB' 'KReclaimable: 174516 kB' 'Slab: 463228 kB' 'SReclaimable: 174516 kB' 'SUnreclaim: 288712 kB' 'KernelStack: 12576 kB' 'PageTables: 7508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9621448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192900 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[xtrace condensed: the setup/common.sh@31-@32 read/continue loop walks the snapshot above field by field, from MemTotal through Unaccepted, until it reaches HugePages_Total]
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.969 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27461344 kB' 'MemUsed: 5368540 kB' 'SwapCached: 0 kB' 'Active: 3255572 kB' 'Inactive: 85740 kB' 'Active(anon): 3093772 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688716 kB' 'Mapped: 65148 kB' 'AnonPages: 655944 kB' 'Shmem: 2441176 kB' 'KernelStack: 7464 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121220 kB' 'Slab: 314852 kB' 'SReclaimable: 121220 kB' 'SUnreclaim: 193632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
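The get_nodes fragment traced above enumerates NUMA nodes from sysfs and records 512 expected pages per node before the per-node lookups begin. A sketch of that bookkeeping, again reconstructed from the trace: the log only shows the final assignments (nodes_sys[N]=512), so reading the count from the hugepages-2048kB sysfs file here is our assumption about where that value comes from.

    #!/usr/bin/env bash
    shopt -s extglob

    # Enumerate NUMA nodes the same way the traced for-loop does and record
    # each node's 2 MB hugepage count. Reading nr_hugepages from sysfs is
    # an assumption; the trace only shows the resulting assignments.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

    # even_2G_alloc expects the 1024-page global pool to split evenly:
    # 1024 / 2 nodes = 512 pages per node on this machine.
    nr_hugepages=1024
    for n in "${!nodes_sys[@]}"; do
        printf 'node%s: %s pages (expected %s)\n' \
            "$n" "${nodes_sys[$n]}" "$((nr_hugepages / no_nodes))"
    done

With two nodes already holding 512 pages each, the per-node HugePages_Surp lookups that follow only need to confirm that no surplus pages distort the even split.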
[xtrace condensed: the setup/common.sh@31-@32 read/continue loop walks the node0 snapshot above field by field, from MemTotal through HugePages_Free, until it reaches HugePages_Surp]
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.970 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 17364728 kB' 'MemUsed: 10347112 kB' 'SwapCached: 0 kB' 'Active: 5190368 kB' 'Inactive: 3347068 kB' 'Active(anon): 4978088 kB' 'Inactive(anon): 0 kB' 'Active(file): 212280 kB' 'Inactive(file): 3347068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354716 kB' 'Mapped: 78604 kB' 'AnonPages: 183092 kB' 'Shmem: 4795368 kB' 'KernelStack: 5144 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53296 kB' 'Slab: 148376 kB' 'SReclaimable: 53296 kB' 'SUnreclaim: 95080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
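The arithmetic around hugepages.sh@115-@117 folds each node's reserved and surplus pages into its expected count before the final comparison. Condensed into plain shell, using this run's values and the hypothetical get_meminfo_sketch helper defined earlier (it stands in for the real get_meminfo and must be sourced first):

    #!/usr/bin/env bash
    # Accounting pass sketched from the hugepages.sh trace; 1024/0/0 and
    # the 512-per-node expectations are this run's values.
    nr_hugepages=1024 resv=0 surp=0
    nodes_test=([0]=512 [1]=512)

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                          # hugepages.sh@116
        node_surp=$(get_meminfo_sketch HugePages_Surp "$node")  # hugepages.sh@117
        (( nodes_test[node] += node_surp ))                     # 0 on both nodes here
    done

    # Global invariant already checked at hugepages.sh@107/@110:
    (( 1024 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'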
[xtrace condensed: the setup/common.sh@31-@32 read/continue loop begins walking the node1 snapshot above field by field (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, ...), with elapsed timestamps advancing from 00:05:05.970 to 00:05:06.231; the excerpt breaks off mid-loop here]
var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.232 node0=512 expecting 512 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:06.232 node1=512 expecting 512 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.232 00:05:06.232 real 0m1.567s 00:05:06.232 user 0m0.638s 00:05:06.232 sys 0m0.859s 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.232 10:51:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:06.232 ************************************ 00:05:06.232 END TEST even_2G_alloc 00:05:06.232 ************************************ 00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:06.232 10:51:20 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:06.232 ************************************ 00:05:06.232 START TEST odd_alloc 
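Both nodes end up reporting 512 free 2 MiB pages, which is what even_2G_alloc expects. The lookup the trace keeps repeating is easy to reproduce standalone; a minimal sketch of that pattern, using a hypothetical helper name meminfo_get (this is not SPDK's get_meminfo itself):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup seen in the xtrace above.
    # meminfo_get <Field> [node] -> prints the field's value.
    shopt -s extglob                     # enables the +([0-9]) pattern below
    meminfo_get() {
        local get=$1 node=${2:-} var val _ mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    meminfo_get HugePages_Free 1         # would print 512 on the node traced above

The per-node strip is the same mem=("${mem[@]#Node +([0-9]) }") expansion the trace shows: /sys/devices/system/node/nodeN/meminfo prefixes every line with "Node N ", while /proc/meminfo does not.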
00:05:06.232 10:51:20 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:06.232 10:51:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:06.232 ************************************
00:05:06.232 START TEST odd_alloc
00:05:06.232 ************************************
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
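The assignments above split the odd total of 1025 pages as node1=512 and node0=513, i.e. the remainder lands on the lower-numbered node. A sketch of that arithmetic, with our own variable names (only the resulting split is taken from the trace):

    # Distribute an odd page count over NUMA nodes, back to front.
    total=1025 nodes=2
    declare -a per_node
    for (( n = nodes - 1; n >= 0; n-- )); do
        per_node[n]=$(( total / (n + 1) ))   # split what remains over nodes 0..n
        total=$(( total - per_node[n] ))
    done
    printf 'node%d=%d\n' 0 "${per_node[0]}" 1 "${per_node[1]}"
    # prints node0=513 and node1=512; 513 + 512 = 1025

With HUGE_EVEN_ALLOC=yes set below, setup.sh is then asked to realize exactly that per-node layout.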
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.232 10:51:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:07.613 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:07.613 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:07.613 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:07.613 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:07.613 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:07.613 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:07.613 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:07.613 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:07.614 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:07.614 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:07.614 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:07.614 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:07.614 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:07.614 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:07.614 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:07.614 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:07.614 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.614 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44815912 kB' 'MemAvailable: 48216348 kB' 'Buffers: 9388 kB' 'Cached: 11034100 kB' 'SwapCached: 0 kB' 'Active: 8449152 kB' 'Inactive: 3432808 kB' 'Active(anon): 8075072 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841300 kB' 'Mapped: 143820 kB' 'Shmem: 7236600 kB' 'KReclaimable: 174340 kB' 'Slab: 462956 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288616 kB' 'KernelStack: 12608 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9621660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193012 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
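A note on the backslash noise in comparisons like \A\n\o\n\H\u\g\e\P\a\g\e\s: that is simply how bash's xtrace renders a quoted right-hand side of == inside [[ ]], escaping each character to show the match is literal rather than a glob. A small demonstration one can paste into a shell:

    set -x
    var=MemTotal get=AnonHugePages
    [[ $var == "$get" ]] || echo "no match"  # trace shows: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x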
[... per-field xtrace elided: every /proc/meminfo key is read and skipped with continue until AnonHugePages matches ...]
00:05:07.615 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.615 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.615 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.615 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
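The surp and resv values read next come out of /proc/meminfo one field at a time, but the same hugepage counters are also exposed under sysfs; a quick manual cross-check (assuming the default 2048 kB page size shown in the trace):

    # Global counters for the 2 MiB pool:
    hp=/sys/kernel/mm/hugepages/hugepages-2048kB
    printf 'total=%s free=%s surp=%s rsvd=%s\n' \
        "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" \
        "$(<"$hp/surplus_hugepages")" "$(<"$hp/resv_hugepages")"
    # Per-node totals, which is where the 513/512 split should be visible:
    for n in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-2048kB; do
        echo "${n%/hugepages*}: $(<"$n/nr_hugepages") pages"
    done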
10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44817328 kB' 'MemAvailable: 48217764 kB' 'Buffers: 9388 kB' 'Cached: 11034104 kB' 'SwapCached: 0 kB' 'Active: 8448784 kB' 'Inactive: 3432808 kB' 'Active(anon): 8074704 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841412 kB' 'Mapped: 143872 kB' 'Shmem: 7236604 kB' 'KReclaimable: 174340 kB' 'Slab: 463048 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288708 kB' 'KernelStack: 12624 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9621676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 192996 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.616 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.617 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo prologue condensed: locals set (get=HugePages_Rsvd, node=), mem_f=/proc/meminfo selected since no node id was given, mapfile -t mem, "Node N " prefix strip, IFS=': ' read loop entered ...]
00:05:07.618 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44817704 kB' 'MemAvailable: 48218140 kB' 'Buffers: 9388 kB' 'Cached: 11034120 kB' 'SwapCached: 0 kB' 'Active: 8448680 kB' 'Inactive: 3432808 kB' 'Active(anon): 8074600 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841252 kB' 'Mapped: 143764 kB' 'Shmem: 7236620 kB' 'KReclaimable: 174340 kB' 'Slab: 463032 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288692 kB' 'KernelStack: 12640 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9621696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193012 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
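What this loop is doing: setup/common.sh's get_meminfo sets IFS=': ', reads /proc/meminfo (or a per-node meminfo file) one "field: value" pair at a time, and echoes the value once the requested field name matches; every other field is skipped with continue. The backslash-escaped right-hand side in the trace (\H\u\g\e...) is only xtrace quoting each character of the literal pattern; the comparison itself is a plain string match. A minimal sketch of that pattern, reconstructed from the trace alone (the real helper may differ in details such as error handling):

    # Hedged sketch of the get_meminfo pattern traced above; details approximate.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # When a node id is supplied and the per-node file exists, read the
        # NUMA-local view instead of the global one (as at common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <id> " prefix; strip it (needs extglob).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "HugePages_Total:     1025" into var=HugePages_Total val=1025.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total      # -> 1025 on the box in this log
    get_meminfo HugePages_Surp 0     # -> per-node surplus, 0 in this run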
[... xtrace condensed: every field of the snapshot above, from MemTotal onward, is compared against HugePages_Rsvd and skipped with continue until the matching line is reached ...]
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
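The arithmetic at hugepages.sh@107-@110 ties the three values together: the odd count this run requested (1025), the surplus and reserved counts just read back, and HugePages_Total, which is fetched next. A stand-alone approximation of that check, assuming only /proc/meminfo (the 1025 is this run's requested count, not a general constant):

    # Hedged sketch of the accounting check; mirrors the trace's arithmetic.
    nr_hugepages=1025
    surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    # The run passes only if the kernel accounted for every requested page:
    # the total must equal the requested count plus surplus plus reserved.
    if (( total == nr_hugepages + surp + resv )); then
        echo "odd_alloc accounting consistent: $total == $nr_hugepages + $surp + $resv"
    fi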
00:05:07.620 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo prologue condensed: locals set (get=HugePages_Total, node=), mem_f=/proc/meminfo, mapfile -t mem, prefix strip, IFS=': ' read loop entered ...]
00:05:07.621 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44817452 kB' 'MemAvailable: 48217888 kB' 'Buffers: 9388 kB' 'Cached: 11034140 kB' 'SwapCached: 0 kB' 'Active: 8448708 kB' 'Inactive: 3432808 kB' 'Active(anon): 8074628 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 841252 kB' 'Mapped: 143764 kB' 'Shmem: 7236640 kB' 'KReclaimable: 174340 kB' 'Slab: 463032 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288692 kB' 'KernelStack: 12640 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9621720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193012 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[... xtrace condensed: each field of the snapshot above is compared against HugePages_Total and skipped with continue until the matching line is reached ...]
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:07.622 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:07.623 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:07.623 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
[... get_meminfo prologue condensed: locals set (get=HugePages_Surp, node=0), mapfile -t mem, "Node 0 " prefix stripped, IFS=': ' read loop entered ...]
00:05:07.623 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27456528 kB' 'MemUsed: 5373356 kB' 'SwapCached: 0 kB' 'Active: 3258928 kB' 'Inactive: 85740 kB' 'Active(anon): 3097128 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688788 kB' 'Mapped: 65160 kB' 'AnonPages: 659016 kB' 'Shmem: 2441248 kB' 'KernelStack: 7496 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121044 kB' 'Slab: 314836 kB' 'SReclaimable: 121044 kB' 'SUnreclaim: 193792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
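get_nodes, traced at hugepages.sh@27-@33, enumerates NUMA nodes with an extglob pathname pattern and records the per-node hugepage counts; the odd total is split 512/513 across the two nodes in this run. A rough stand-alone equivalent under the same sysfs layout assumptions (the awk field index matches the "Node 0 HugePages_Total: 512" line format of per-node meminfo):

    # Hedged sketch of the per-node walk; nodes_sys mirrors the trace above.
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # "Node 0 HugePages_Total:   512" -> field 4 is the per-node count.
        nodes_sys[${node##*node}]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1
    echo "no_nodes=$no_nodes split=${nodes_sys[*]}"   # e.g. split=512 513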
[... xtrace condensed: each field of the node0 snapshot above is compared against HugePages_Surp and skipped with continue until the matching line is reached ...]
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 17361244 kB' 'MemUsed: 10350596 kB' 'SwapCached: 0 kB' 'Active: 5189548 kB' 'Inactive: 3347068 kB' 'Active(anon): 4977268 kB' 'Inactive(anon): 0 kB' 'Active(file): 212280 kB' 'Inactive(file): 3347068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354756 kB' 'Mapped: 78604 kB' 'AnonPages: 181960 kB' 'Shmem: 4795408 kB' 'KernelStack: 5128 kB' 'PageTables: 2896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53296 kB' 'Slab: 148196 kB' 'SReclaimable: 53296 kB' 'SUnreclaim: 94900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
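
The trace above is setup/common.sh's get_meminfo scanning a meminfo file one 'key: value' line at a time until it reaches the requested key (HugePages_Surp here), preferring the per-node sysfs file when a node id is passed. A minimal sketch of that lookup follows; the helper name get_meminfo_sketch is hypothetical, and feeding the mapfile'd lines back through the read loop via printf is an assumption based on the printf at setup/common.sh@16:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Return the value of one meminfo key, from /proc/meminfo or, when a
    # node id is given and the sysfs file exists, from that node's meminfo.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Usage would be get_meminfo_sketch HugePages_Surp 1. As a sanity check, the node-1 dump above is internally consistent: MemUsed 10350596 kB equals MemTotal 27711840 kB minus MemFree 17361244 kB, and the node reports HugePages_Total and HugePages_Free of 513, the odd count this test configured.
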
00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.624 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.625 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:07.626 node0=512 expecting 513 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:07.626 node1=513 expecting 512 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:07.626 00:05:07.626 real 0m1.532s 00:05:07.626 user 0m0.637s 00:05:07.626 sys 0m0.860s 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.626 10:51:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.626 ************************************ 00:05:07.626 END TEST odd_alloc 00:05:07.626 ************************************ 00:05:07.626 10:51:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:07.626 10:51:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:07.626 10:51:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.626 10:51:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.626 10:51:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.626 ************************************ 00:05:07.626 START TEST custom_alloc 00:05:07.626 ************************************ 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:07.626 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:07.885 10:51:22 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.885 10:51:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.825 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:08.825 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:08.825 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:08.825 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:08.825 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:08.825 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:08.825 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:08.825 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:08.825 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:08.825 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:08.825 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:08.825 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:05:08.825 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.090 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.090 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.090 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.090 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43787276 kB' 'MemAvailable: 47187712 kB' 'Buffers: 9388 kB' 'Cached: 11034228 kB' 'SwapCached: 0 kB' 'Active: 8450732 kB' 'Inactive: 3432808 kB' 'Active(anon): 8076652 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843292 kB' 'Mapped: 143840 kB' 'Shmem: 7236728 kB' 'KReclaimable: 174340 kB' 'Slab: 463036 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288696 kB' 'KernelStack: 12656 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9622076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193076 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.090 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
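
Stepping back, the custom_alloc setup traced just before the device list reduces to two moves: get_test_nr_hugepages turns a size in kB into a page count (1048576 kB -> 512 pages and 2097152 kB -> 1024 pages at the default 2048 kB page), and get_test_nr_hugepages_per_node spreads that count over the nodes, either evenly or by copying the explicit nodes_hp targets. A condensed sketch follows; the names mirror setup/hugepages.sh, but the remainder bookkeeping of the real even split (the ': 256' / ': 1' no-op markers in the trace) is simplified away, so treat this as an approximation:

    default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in the dump above
    nodes_hp=() nodes_test=() nr_hugepages=0

    get_test_nr_hugepages() {
        local size=$1   # requested size in kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$((size / default_hugepages))
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {
        local _no_nodes=2 node
        nodes_test=()
        if (( ${#nodes_hp[@]} > 0 )); then
            # Explicit per-node targets win, as in the second and third calls traced.
            for node in "${!nodes_hp[@]}"; do nodes_test[node]=${nodes_hp[node]}; done
        else
            # Otherwise split the total evenly across the nodes.
            for ((node = 0; node < _no_nodes; node++)); do
                nodes_test[node]=$((nr_hugepages / _no_nodes))
            done
        fi
    }

With nodes_hp[0]=512 and nodes_hp[1]=1024 this yields HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and 1536 pages in total, which is exactly what the dump above reports: HugePages_Total 1536 and Hugetlb 3145728 kB (1536 x 2048 kB).
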
00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.091 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43789908 kB' 'MemAvailable: 47190344 kB' 'Buffers: 9388 kB' 'Cached: 11034228 kB' 'SwapCached: 0 kB' 'Active: 8450816 kB' 'Inactive: 3432808 kB' 'Active(anon): 8076736 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843376 kB' 'Mapped: 143840 kB' 'Shmem: 7236728 kB' 'KReclaimable: 174340 kB' 'Slab: 463020 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288680 kB' 'KernelStack: 12608 kB' 'PageTables: 7696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9622092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193060 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
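
Two details of verify_nr_hugepages are worth unpacking here. The check at setup/hugepages.sh@96 compares the transparent-hugepage mode string against the glob *[never]* (xtrace prints the pattern with every character escaped); since this box reports 'always [madvise] never', THP is not disabled, so the function goes on to read AnonHugePages, gets 0 kB, and sets anon=0. A sketch of that gate, assuming the standard sysfs location and reusing the hypothetical helper from the earlier sketch:

    # The active THP mode is the bracketed word, e.g. "always [madvise] never".
    thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # reads 0 on this machine
    fi

Second, both get_meminfo calls in this stretch run with node unset, so /sys/devices/system/node/node/meminfo does not exist, the [[ -n '' ]] test at setup/common.sh@25 fails, and mem_f stays at the system-wide /proc/meminfo; that is why these dumps carry whole-machine totals (CommitLimit, VmallocTotal, DirectMap*) that the per-node files lack.
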
00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.092 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.093 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43790244 kB' 'MemAvailable: 47190680 kB' 'Buffers: 9388 kB' 'Cached: 11034232 kB' 'SwapCached: 0 kB' 'Active: 8450980 kB' 'Inactive: 3432808 kB' 'Active(anon): 8076900 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843524 kB' 
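For readability: the xtrace above corresponds to a small meminfo lookup helper. A minimal sketch of get_meminfo, reconstructed from this trace alone (a simplification under that assumption, not the verbatim setup/common.sh source):

    shopt -s extglob   # needed for the +([0-9]) patterns below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        local line

        # With a node argument, read that node's meminfo from sysfs instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Print the value of the requested field and stop at the first hit.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp        # system-wide: prints 0 in this run
    get_meminfo HugePages_Total 0     # NUMA node 0: prints 512 in this run

Called without a node argument it scans /proc/meminfo, which is why the node/meminfo existence test at common.sh@23 fails above and the "Node <n>" prefix strip at @29 is a no-op.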
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.094 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43790244 kB' 'MemAvailable: 47190680 kB' 'Buffers: 9388 kB' 'Cached: 11034232 kB' 'SwapCached: 0 kB' 'Active: 8450980 kB' 'Inactive: 3432808 kB' 'Active(anon): 8076900 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 843524 kB' 'Mapped: 143784 kB' 'Shmem: 7236732 kB' 'KReclaimable: 174340 kB' 'Slab: 463020 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288680 kB' 'KernelStack: 12640 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9622116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193060 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[xtrace condensed: the same per-field scan as above, this time against HugePages_Rsvd; every other key hits "continue"]
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:09.096 nr_hugepages=1536
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.096 resv_hugepages=0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.096 surplus_hugepages=0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.096 anon_hugepages=0
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
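What hugepages.sh@99-109 is checking, restated as a sketch using the get_meminfo sketch above. Variable names are taken from the trace; the surrounding structure and the source of anon_hugepages are assumptions, since neither is visible in this excerpt:

    nr_hugepages=1536                     # count configured for this test
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # Assumed lookup -- the trace only shows the echoed result (0).
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"

    # The kernel's pool must account for exactly the requested pages:
    # HugePages_Total == nr_hugepages + surplus + reserved (1536 + 0 + 0).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))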
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.096 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.372 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.373 10:51:23 
[... xtrace elided: get_meminfo walks the remaining /proc/meminfo fields (SUnreclaim, KernelStack, PageTables, ..., CmaFree, Unaccepted); every key fails the match against HugePages_Total and hits continue ...]
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
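The wall of [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue entries above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time; the backslashes are only xtrace quoting a literal (non-glob) comparison pattern. A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from the SPDK tree (the real helper may differ in details):

#!/usr/bin/env bash
# get_meminfo <Field> [<numa-node>] -- sketch of the scan traced above.
shopt -s extglob    # enables the +([0-9]) pattern used to strip "Node N "

get_meminfo() {
    local get=$1 node=$2
    local var val _ mem
    local mem_f=/proc/meminfo
    # With a node id, read that node's sysfs copy of meminfo instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
    # Split "Field: value kB" on ': ' and return the first matching value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total      # -> 1536 on this box (512 + 1024)
get_meminfo HugePages_Surp 0     # -> surplus huge pages on NUMA node 0

The @110 check then asserts that the 1536 pages read back equal the requested nr_hugepages plus surplus and reserved pages, and get_nodes records the per-node split (512 on node0, 1024 on node1) from the sysfs node directories.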
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.373 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.374 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27476052 kB' 'MemUsed: 5353832 kB' 'SwapCached: 0 kB' 'Active: 3261828 kB' 'Inactive: 85740 kB' 'Active(anon): 3100028 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688856 kB' 'Mapped: 65180 kB' 'AnonPages: 661848 kB' 'Shmem: 2441316 kB' 'KernelStack: 7512 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121044 kB' 'Slab: 314964 kB' 'SReclaimable: 121044 kB' 'SUnreclaim: 193920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:09.374 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.374 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the node-0 scan continues through the remaining /sys/devices/system/node/node0/meminfo fields (MemFree, MemUsed, SwapCached, ..., HugePages_Free), hitting continue on each key that is not HugePages_Surp ...]
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
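Node 0 reports HugePages_Surp 0, so its expectation stays at 512 and the @115 loop advances to node 1. Reusing the get_meminfo sketch above, the @116/@117 accounting reduces to the following (reconstructed; resv is presumably a prior reserved-pages read and is 0 in this run, as the += 0 entry shows):

# Per-node expected page counts built by the hugepages.sh@115-@117 loop.
declare -a nodes_test=([0]=512 [1]=1024)    # custom_alloc's per-node request
resv=0                                      # reserved pages; 0 in this run
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
done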
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 16316104 kB' 'MemUsed: 11395736 kB' 'SwapCached: 0 kB' 'Active: 5189800 kB' 'Inactive: 3347068 kB' 'Active(anon): 4977520 kB' 'Inactive(anon): 0 kB' 'Active(file): 212280 kB' 'Inactive(file): 3347068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354844 kB' 'Mapped: 78604 kB' 'AnonPages: 182212 kB' 'Shmem: 4795496 kB' 'KernelStack: 5112 kB' 'PageTables: 2796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 53296 kB' 'Slab: 148132 kB' 'SReclaimable: 53296 kB' 'SUnreclaim: 94836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.375 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the node-1 scan continues through the remaining /sys/devices/system/node/node1/meminfo fields, each failing the match against HugePages_Surp and hitting continue ...]
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:09.376 node0=512 expecting 512
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:09.376 node1=1024 expecting 1024
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:09.376
00:05:09.376 real 0m1.546s
00:05:09.376 user 0m0.658s
00:05:09.376 sys 0m0.852s
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:09.376 10:51:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:09.376 ************************************
00:05:09.376 END TEST custom_alloc
00:05:09.376 ************************************
00:05:09.376 10:51:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:09.376 10:51:23 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:09.376 10:51:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:09.376 10:51:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.376 10:51:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:09.376 ************************************
00:05:09.376 START TEST no_shrink_alloc
00:05:09.376 ************************************
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
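custom_alloc passes: both nodes read back exactly what was requested, and the @130 check compares the comma-joined lists ('512,1024', printed backslash-escaped by xtrace) as plain strings. no_shrink_alloc then sizes its pool: get_test_nr_hugepages is handed 2097152 (kB) and arrives at nr_hugepages=1024, which matches the 2048 kB Hugepagesize this machine reports. A sketch of that arithmetic; the division step is inferred from the surrounding values, not lifted from hugepages.sh:

# Inferred sizing behind 'get_test_nr_hugepages 2097152 0':
size=2097152               # requested pool in kB (2 GiB)
default_hugepages=2048     # Hugepagesize from /proc/meminfo, in kB
(( size >= default_hugepages )) || exit 1      # the @55 sanity check
nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
nodes_test[0]=$nr_hugepages                    # pinned to node 0 via node_ids=('0')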
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.376 10:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:10.759 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:10.759 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:10.759 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:10.759 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:10.759 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:10.759 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:10.759 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:10.759 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:10.759 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:10.759 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:10.759 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:10.759 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:10.759 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:10.759 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:10.759 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:10.759 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:10.759 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:10.759 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44817188 kB' 'MemAvailable: 48217624 kB' 'Buffers: 9388 kB' 'Cached: 11034356 kB' 'SwapCached: 0 kB' 'Active: 8454288 kB' 'Inactive: 3432808 kB' 'Active(anon): 8080208 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 846704 kB' 'Mapped: 143852 kB' 'Shmem: 7236856 kB' 'KReclaimable: 174340 kB' 'Slab: 463216 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288876 kB' 'KernelStack: 12656 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9624536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193188 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
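With setup.sh rerun (every device already bound to vfio-pci), verify_nr_hugepages starts by probing transparent huge pages: the bracketed word in the @96 test is the active THP mode ('always [madvise] never' here), and AnonHugePages is only read when the mode is not [never], which is what the /proc/meminfo dump above was fetched for. A sketch of that probe, reusing the get_meminfo sketch from earlier (the sysfs path is the standard THP knob; what the test does with anon afterwards is an assumption):

# Sketch of the @96/@97 transparent-hugepage probe.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP may be backing anonymous memory with huge pages; record how much.
    anon=$(get_meminfo AnonHugePages)
fi
echo "anon=$anon"    # 0 kB in this run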
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.760 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... 00:05:10.760-00:05:10.761 xtrace elided: the @31/@32 read loop compared each remaining /proc/meminfo key (Mlocked through HardwareCorrupted) against AnonHugePages and hit 'continue' on every mismatch ...]
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
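For readers following the trace: the @17-@33 lines above are setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" line at a time. A minimal sketch of that loop, reconstructed only from this trace (the helper's real source may differ; the node-file fallback and the extglob prefix-strip are inferred from the @23/@29 lines):

    #!/usr/bin/env bash
    shopt -s extglob                # required by the +([0-9]) pattern below

    get_meminfo() {                 # usage: get_meminfo <Key> [node]
        local get=$1 node=$2
        local var val mem_f mem
        mem_f=/proc/meminfo
        # Prefer the node-local meminfo when a node id is given and the file exists (@23).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"                # @28: slurp the file into an array
        mem=("${mem[@]#Node +([0-9]) }")        # @29: node files prefix lines with "Node N "
        while IFS=': ' read -r var val _; do    # @31: split "Key: value kB" into var/val
            [[ $var == "$get" ]] || continue    # @32: the long runs of 'continue' in the trace
            echo "${val:-0}"                    # @33: e.g. 'AnonHugePages: 0 kB' -> 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")     # @16: replay the captured lines
    }

    get_meminfo AnonHugePages                   # prints 0 on the machine traced here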
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... 00:05:10.761 xtrace elided: get_meminfo locals and mapfile setup (@17-@31), as in the sketch above, with get=HugePages_Surp ...]
00:05:10.761 10:51:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44818808 kB' 'MemAvailable: 48219244 kB' 'Buffers: 9388 kB' 'Cached: 11034356 kB' 'SwapCached: 0 kB' 'Active: 8454312 kB' 'Inactive: 3432808 kB' 'Active(anon): 8080232 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 846676 kB' 'Mapped: 143796 kB' 'Shmem: 7236856 kB' 'KReclaimable: 174340 kB' 'Slab: 463196 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288856 kB' 'KernelStack: 12960 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9624552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193284 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[... 00:05:10.761-00:05:10.763 xtrace elided: loop compared every key from MemTotal through HugePages_Free against HugePages_Surp, continuing on each mismatch ...]
00:05:10.763 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.763 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.763 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.763 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:10.763 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... 00:05:10.763 xtrace elided: get_meminfo setup (@17-@31) with get=HugePages_Rsvd ...]
00:05:10.764 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44818444 kB' 'MemAvailable: 48218880 kB' 'Buffers: 9388 kB' 'Cached: 11034360 kB' 'SwapCached: 0 kB' 'Active: 8454528 kB' 'Inactive: 3432808 kB' 'Active(anon): 8080448 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 847240 kB' 'Mapped: 143856 kB' 'Shmem: 7236860 kB' 'KReclaimable: 174340 kB' 'Slab: 463228 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 288888 kB' 'KernelStack: 13056 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9623384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193380 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[... 00:05:10.764-00:05:10.766 xtrace elided: loop compared MemTotal through HugePages_Free against HugePages_Rsvd, continuing on each mismatch ...]
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:10.766 nr_hugepages=1024
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:10.766 resv_hugepages=0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:10.766 surplus_hugepages=0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:10.766 anon_hugepages=0
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
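The @97-@110 sequence just traced is the accounting step of the no_shrink_alloc test: it reads the hugepage counters through get_meminfo and emits the four summary lines seen above. A hedged sketch of the check (reconstructed from the trace; the function wrapper is illustrative, the statement order is tidied slightly, and get_meminfo is the sketch shown earlier):

    # The pool must hold exactly the requested pages, with no surplus,
    # no reservations, and no transparent hugepages in play.
    verify_hugepage_accounting() {
        local nr_hugepages=1024                    # count configured for this test run
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)          # @97  -> 0
        surp=$(get_meminfo HugePages_Surp)         # @99  -> 0
        resv=$(get_meminfo HugePages_Rsvd)         # @100 -> 0
        echo "nr_hugepages=$nr_hugepages"          # @102-@105: the summary lines above
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        total=$(get_meminfo HugePages_Total)       # @110 -> 1024 in this run
        (( total == nr_hugepages + surp + resv )) || return 1   # @107
        (( total == nr_hugepages )) || return 1                 # @109
    }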
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.766 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.766 10:51:25 
[... xtrace elided: each remaining non-matching /proc/meminfo key, Active(anon) through ShmemHugePages, hits the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' test followed by 'continue' ...]
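The loop elided above is setup/common.sh's get_meminfo: it slurps /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node N " prefix, splits each line on IFS=': ', and compares the key against the requested field; non-matching keys fall through to 'continue', and the first match echoes its value and returns 0, as happens for HugePages_Total just below. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh, so treat the exact details as assumptions:

  #!/usr/bin/env bash
  shopt -s extglob  # required for the +([0-9]) pattern that strips "Node N " prefixes
  # Hedged reconstruction of the get_meminfo pattern seen in the trace;
  # not the verbatim SPDK helper.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Prefer the per-node sysfs copy when a node is named and the file exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # "MemTotal: 123 kB" -> var=MemTotal val=123
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total    # prints 1024 on this box, matching the trace
  get_meminfo HugePages_Free 0   # same field, but read from node0's meminfo

The hugepages.sh@110 recheck that follows repeats the same identity as @107, 1024 == nr_hugepages + surp + resv, with the freshly read total.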
00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26407448 kB' 'MemUsed: 6422436 kB' 'SwapCached: 0 kB' 'Active: 3265072 kB' 'Inactive: 85740 kB' 'Active(anon): 3103272 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688860 kB' 'Mapped: 65304 kB' 'AnonPages: 664744 kB' 'Shmem: 2441320 kB' 'KernelStack: 7640 kB' 'PageTables: 5568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121044 kB' 'Slab: 314948 kB' 'SReclaimable: 121044 kB' 'SUnreclaim: 193904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.768 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.768 10:51:25 
[... xtrace elided: the same per-key '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' test and 'continue' repeat for the remaining node0 meminfo keys, MemFree through HugePages_Total ...]
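Here get_meminfo was re-run with node=0, so mem_f switched to /sys/devices/system/node/node0/meminfo, and get_nodes had already seeded the per-node totals (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2). A hedged sketch of collecting such per-node counts through the standard sysfs hugepage interface; the hugepages-2048kB path matches the 2048 kB Hugepagesize reported above, but this is an illustration, not the script's exact code:

  #!/usr/bin/env bash
  # Gather 2 MiB hugepage counts per NUMA node; assumes the stock sysfs layout.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}  # "/sys/.../node0" -> "0"
      nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[$n]}"  # e.g. node0=1024, node1=0 as in this run
  done

The HugePages_Surp value read just below is then added into nodes_test[0] before the 'node0=1024 expecting 1024' comparison.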
00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:10.770 node0=1024 expecting 1024 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.770 10:51:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.154 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.154 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:12.154 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.154 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.154 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.154 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.154 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.154 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.154 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.154 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:12.154 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:12.154 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:12.154 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:12.154 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:12.154 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:12.154 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:12.154 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:12.154 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44781552 kB' 'MemAvailable: 48181988 kB' 'Buffers: 9388 kB' 'Cached: 11034472 kB' 'SwapCached: 0 kB' 'Active: 8455768 kB' 'Inactive: 3432808 kB' 'Active(anon): 8081688 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 847988 kB' 'Mapped: 143884 kB' 'Shmem: 7236972 kB' 'KReclaimable: 174340 kB' 'Slab: 463516 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 289176 kB' 'KernelStack: 12720 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9622776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193140 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB' 00:05:12.154 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.154 10:51:26 
[... xtrace elided: the remaining non-matching keys, MemFree through CommitLimit, each hit the same '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' test and 'continue' ...]
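Just before this pass, setup.sh was re-run with NRHUGE=512 CLEAR_HUGE=no and reported 'Requested 512 hugepages but 1024 already allocated on node0': the pool is never shrunk, which is exactly what this no_shrink_alloc case re-verifies. The scan here is verify_nr_hugepages sampling AnonHugePages, and it only runs because the guard at hugepages.sh@96 saw 'always [madvise] never' in /sys/kernel/mm/transparent_hugepage/enabled; the bracketed token is the active THP mode, and anonymous THP pages only need to be folded into the accounting when that mode is not [never]. A small hedged sketch of that guard; variable names are illustrative:

  #!/usr/bin/env bash
  # The bracketed word in this sysfs file is the active THP mode,
  # e.g. "always [madvise] never" as in the trace above.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      # THP may be handing out anonymous huge pages; sample the counter (kB).
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
  else
      anon=0
  fi
  echo "anon_hugepages=$anon"  # 0 in this run

00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[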
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.155 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.156 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44781616 kB' 'MemAvailable: 48182052 kB' 'Buffers: 9388 kB' 'Cached: 11034476 kB' 'SwapCached: 0 kB' 'Active: 8455840 kB' 'Inactive: 3432808 kB' 'Active(anon): 8081760 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 848076 kB' 'Mapped: 143884 kB' 'Shmem: 7236976 kB' 'KReclaimable: 174340 kB' 'Slab: 463484 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 289144 kB' 'KernelStack: 12688 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9622796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193124 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
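[editor's note] The printf above is the verbatim /proc/meminfo snapshot the helper read. For a quick manual look at just the hugepage counters in such a snapshot, a one-liner like this (ours, not part of the test) suffices:

    # Show only the hugepage-related counters from /proc/meminfo.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
    # The snapshot above reports: 1024 total, 1024 free, 0 reserved,
    # 0 surplus, a 2048 kB page size, and 2097152 kB (2 GiB) of hugetlb.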
[trace condensed: every field from MemTotal through HugePages_Rsvd fails the match against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and is skipped with continue]
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
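[editor's note] A note on the odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings: they are an xtrace artifact, not corruption. Inside [[ ... == ... ]] a quoted right-hand side matches literally rather than as a glob, and bash's set -x output marks that by backslash-escaping every character of the expanded word. Reproducible in any bash shell (sketch; trace format roughly as shown):

    set -x
    get=HugePages_Surp
    [[ HugePages_Total == "$get" ]]   # traces as: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x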
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.157 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.158 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44781364 kB' 'MemAvailable: 48181800 kB' 'Buffers: 9388 kB' 'Cached: 11034492 kB' 'SwapCached: 0 kB' 'Active: 8455444 kB' 'Inactive: 3432808 kB' 'Active(anon): 8081364 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 847652 kB' 'Mapped: 143808 kB' 'Shmem: 7236992 kB' 'KReclaimable: 174340 kB' 'Slab: 463532 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 289192 kB' 'KernelStack: 12688 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9622816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193124 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[trace condensed: every field from MemTotal through HugePages_Free fails the match against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and is skipped with continue]
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:12.159 nr_hugepages=1024
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:12.159 resv_hugepages=0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:12.159 surplus_hugepages=0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:12.159 anon_hugepages=0
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
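[editor's note] The two arithmetic checks at hugepages.sh@107/@109 assert that the hugepage pool is consistent: with 0 surplus and 0 reserved pages, the pool must equal the 1024 pages the test configured. Which variable the already-expanded 1024 came from is not visible in the trace, so this sketch (ours, with a hypothetical meminfo_field helper) merely restates the relation using HugePages_Total as a plausible stand-in:

    #!/usr/bin/env bash
    # Restates the traced consistency checks; meminfo_field is our helper.
    meminfo_field() { awk -v f="$1" '$1 == f ":" { print $2 }' /proc/meminfo; }

    nr_hugepages=1024                          # what the test requested
    surp=$(meminfo_field HugePages_Surp)       # 0 in this run
    resv=$(meminfo_field HugePages_Rsvd)       # 0 in this run
    total=$(meminfo_field HugePages_Total)     # 1024 in this run

    # Mirrors (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages ))
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) &&
        echo "hugepage pool consistent"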
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.159 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.160 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.160 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 44782548 kB' 'MemAvailable: 48182984 kB' 'Buffers: 9388 kB' 'Cached: 11034528 kB' 'SwapCached: 0 kB' 'Active: 8455484 kB' 'Inactive: 3432808 kB' 'Active(anon): 8081404 kB' 'Inactive(anon): 0 kB' 'Active(file): 374080 kB' 'Inactive(file): 3432808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 847660 kB' 'Mapped: 143808 kB' 'Shmem: 7237028 kB' 'KReclaimable: 174340 kB' 'Slab: 463532 kB' 'SReclaimable: 174340 kB' 'SUnreclaim: 289192 kB' 'KernelStack: 12688 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9622840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193124 kB' 'VmallocChunk: 0 kB' 'Percpu: 30528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 454236 kB' 'DirectMap2M: 9951232 kB' 'DirectMap1G: 58720256 kB'
[trace condensed: the same per-field scan now runs against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; MemTotal through VmallocChunk each fail the test and are skipped with continue; the scan continues below]
00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.161 10:51:26 
00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:12.161 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26366340 kB' 'MemUsed: 6463544 kB' 'SwapCached: 0 kB' 'Active: 3264852 kB' 'Inactive: 85740 kB' 'Active(anon): 3103052 kB' 'Inactive(anon): 0 kB' 'Active(file): 161800 kB' 'Inactive(file): 85740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2688860 kB' 'Mapped: 65204 kB' 'AnonPages: 664804 kB' 'Shmem: 2441320 kB' 'KernelStack: 7480 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121044 kB' 'Slab: 315292 kB' 'SReclaimable: 121044 kB' 'SUnreclaim: 194248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: the same setup/common.sh@31-32 per-key scan runs over the node0 values above, skipping MemTotal through HugePages_Free because none match HugePages_Surp ...]
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:12.163 node0=1024 expecting 1024
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:12.163
00:05:12.163 real 0m2.874s
00:05:12.163 user 0m1.171s
00:05:12.163 sys 0m1.602s
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:12.163 10:51:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:12.163 ************************************
00:05:12.163 END TEST no_shrink_alloc
00:05:12.163 ************************************
00:05:12.163 10:51:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
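
What the elided scans above are doing, in plain form: setup/common.sh walks a meminfo file line by line until the requested key matches, then echoes its value. A minimal standalone sketch of that technique, paraphrased from the traced commands (not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob
    # Look up one key in /proc/meminfo, or in a node's meminfo file when a
    # NUMA node number is given as the second argument.
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total      # whole system, e.g. 1024 on this host
    get_meminfo HugePages_Surp 0     # node 0 only, e.g. 0 in the trace above
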
00:05:12.163 10:51:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:12.163 10:51:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... xtrace elided: setup/hugepages.sh@39-41 loop over both nodes' "/sys/devices/system/node/node$node/hugepages/hugepages-"* entries, resetting each with "echo 0" ...]
00:05:12.163 10:51:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:12.163 10:51:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:12.163
00:05:12.163 real 0m11.848s
00:05:12.163 user 0m4.566s
00:05:12.163 sys 0m6.144s
00:05:12.163 10:51:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:12.163 10:51:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:12.163 ************************************
00:05:12.163 END TEST hugepages
00:05:12.163 ************************************
00:05:12.163 10:51:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0
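
For reference, the clear_hp teardown traced above reduces to writing zero into each node's hugepage pools. A minimal sketch under the standard sysfs layout (needs root, like the test itself; the loop mirrors the traced setup/hugepages.sh logic):

    #!/usr/bin/env bash
    shopt -s nullglob
    # Drain every hugepage pool, on every NUMA node, for every page size.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # flag consumed by the suite's later scripts/setup.sh runs
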
00:05:12.163 10:51:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:12.163 10:51:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:12.163 10:51:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.163 10:51:26 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:12.422 ************************************
00:05:12.422 START TEST driver
00:05:12.422 ************************************
00:05:12.422 10:51:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:12.422 * Looking for test storage...
00:05:12.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:12.422 10:51:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:05:12.422 10:51:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:12.422 10:51:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:14.964 10:51:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:14.964 10:51:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:14.964 10:51:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:14.964 10:51:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:14.964 ************************************
00:05:14.964 START TEST guess_driver
00:05:14.964 ************************************
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:05:14.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:05:14.964 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:14.965 Looking for driver=vfio-pci
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:14.965 10:51:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:16.345 10:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:16.345 10:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:16.345 10:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... xtrace elided: the setup/driver.sh@58/@61/@57 marker check repeats identically for each remaining device line of the config output, 00:05:16.345-00:05:17.283 ...]
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:17.283 10:51:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:20.572
00:05:20.572 real 0m5.023s
00:05:20.572 user 0m1.158s
00:05:20.572 sys 0m1.937s
00:05:20.572 10:51:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.572 10:51:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:20.572 ************************************
00:05:20.572 END TEST guess_driver
00:05:20.572 ************************************
00:05:20.572 10:51:34 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
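
The guess_driver pass above condenses to two checks: the kernel exposes IOMMU groups, and vfio_pci resolves to real modules. A sketch of that decision, paraphrased from the traced `modprobe --show-depends` test (the is_driver helper and the fallback string match the markers seen in the trace; this is illustrative, not the verbatim setup/driver.sh):

    #!/usr/bin/env bash
    shopt -s nullglob
    # A driver is usable if modprobe can resolve it to actual .ko files.
    is_driver() { modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'; }

    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) && is_driver vfio_pci; then
        driver=vfio-pci            # 141 IOMMU groups on this host, so this branch wins
    else
        driver='No valid driver found'
    fi
    echo "Looking for driver=$driver"
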
00:05:20.572
00:05:20.572 real 0m7.698s
00:05:20.572 user 0m1.739s
00:05:20.572 sys 0m2.988s
00:05:20.572 10:51:34 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.572 10:51:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:05:20.572 ************************************
00:05:20.572 END TEST driver
00:05:20.572 ************************************
00:05:20.572 10:51:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:20.572 10:51:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:20.572 10:51:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:20.572 10:51:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.572 10:51:34 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:20.572 ************************************
00:05:20.572 START TEST devices
00:05:20.572 ************************************
00:05:20.572 10:51:34 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:05:20.572 * Looking for test storage...
00:05:20.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:20.572 10:51:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:20.572 10:51:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:05:20.572 10:51:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:20.572 10:51:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:21.510 10:51:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:05:21.510 10:51:35 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:05:21.510 No valid GPT data, bailing
00:05:21.510 10:51:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:05:21.510 10:51:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:05:21.510 10:51:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:21.510 10:51:35 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:05:21.510 10:51:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:21.771 10:51:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:05:21.771 10:51:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:21.771 10:51:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
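
Candidate-disk selection, as traced above, filters on three things: the namespace is not zoned, it holds no live partition table (the spdk-gpt.py / blkid probe), and it is at least min_disk_size bytes. A compact sketch of the sysfs side of those checks; the partition-table probe is only noted in a comment because it depends on the SPDK helper:

    #!/usr/bin/env bash
    shopt -s nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    is_block_zoned() {
        # Zoned namespaces report something other than "none" here.
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]]
    }

    sec_size_to_bytes() {
        # /sys/block/<dev>/size counts 512-byte sectors regardless of LBA format.
        local dev=$1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))
    }

    for block in /sys/block/nvme*n*; do
        dev=${block##*/}
        is_block_zoned "$dev" && continue
        # The real test also rejects disks whose partition table is in use,
        # via scripts/spdk-gpt.py plus a "blkid -s PTTYPE" probe (omitted here).
        (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && echo "candidate: $dev"
    done
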
00:05:21.771 10:51:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:21.771 10:51:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.771 10:51:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.771 10:51:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:21.771 ************************************
00:05:21.771 START TEST nvme_mount
00:05:21.771 ************************************
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:21.771 10:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:22.715 Creating new GPT entries in memory.
00:05:22.715 GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:22.715 other utilities.
00:05:22.715 10:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:05:22.715 10:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:22.715 10:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:22.715 10:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:22.715 10:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:23.655 Creating new GPT entries in memory.
00:05:23.655 The operation has completed successfully.
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 106614
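
The partitioning step traced above is a small, self-contained recipe: wipe the GPT, create the partition under an flock so nothing races the table rewrite, and wait for the kernel to publish the new node. A sketch of the same sequence; SPDK's sync_dev_uevents.sh waits on kernel uevents, and `udevadm settle` is used here as a simpler stand-in:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1

    sgdisk "$disk" --zap-all                     # destroy any existing GPT/MBR
    # Hold the whole-disk lock while creating partition 1; sector range
    # 2048..2099199 matches the traced sgdisk call (1 GiB in 512 B sectors).
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    udevadm settle                               # wait until /dev/nvme0n1p1 exists
    mkfs.ext4 -qF "${disk}p1"
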
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:23.655 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:23.914 10:51:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:25.298 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:05:25.298 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:25.298 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:05:25.298 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... xtrace elided: setup/devices.sh@62/@60 repeat for the non-matching controllers 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0 ...]
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.299 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.299 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.560 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:25.560 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:25.560 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.560 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.560 10:51:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:26.944 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.945 10:51:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:27.886 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.147 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.147 00:05:28.147 real 0m6.536s 00:05:28.147 user 0m1.510s 00:05:28.147 sys 0m2.612s 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.147 10:51:42 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.147 ************************************ 00:05:28.147 END TEST nvme_mount 00:05:28.147 ************************************ 00:05:28.147 10:51:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:28.147 10:51:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:28.147 10:51:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.147 10:51:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.147 10:51:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:28.147 ************************************ 00:05:28.147 START TEST dm_mount 00:05:28.147 ************************************ 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:28.147 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.148 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:28.148 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.148 10:51:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:29.533 Creating new GPT entries in memory. 00:05:29.533 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:29.533 other utilities. 00:05:29.533 10:51:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:29.533 10:51:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.533 10:51:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:29.533 10:51:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.533 10:51:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:30.474 Creating new GPT entries in memory. 00:05:30.474 The operation has completed successfully. 00:05:30.474 10:51:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:30.474 10:51:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.474 10:51:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.474 10:51:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.474 10:51:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:31.414 The operation has completed successfully. 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 108998 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.414 10:51:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:32.355 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:32.615 10:51:46 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.615 10:51:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.995 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.996 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.996 00:05:33.996 real 0m5.816s 00:05:33.996 user 0m0.999s 00:05:33.996 sys 0m1.660s 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.996 10:51:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.996 ************************************ 00:05:33.996 END TEST dm_mount 00:05:33.996 ************************************ 00:05:33.996 10:51:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.996 10:51:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.255 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:34.255 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:34.255 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.255 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.255 10:51:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:34.255 00:05:34.255 real 0m14.347s 00:05:34.255 user 0m3.203s 00:05:34.255 sys 0m5.336s 00:05:34.256 10:51:48 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.256 10:51:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:34.256 ************************************ 00:05:34.256 END TEST devices 00:05:34.256 ************************************ 00:05:34.514 10:51:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:34.514 00:05:34.514 real 0m45.252s 00:05:34.514 user 0m12.914s 00:05:34.514 sys 0m20.325s 00:05:34.514 10:51:48 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.514 10:51:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:34.514 ************************************ 00:05:34.514 END TEST setup.sh 00:05:34.514 ************************************ 00:05:34.514 10:51:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.514 10:51:48 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:35.452 Hugepages 00:05:35.452 node hugesize free / total 00:05:35.452 node0 1048576kB 0 / 0 00:05:35.452 node0 2048kB 2048 / 2048 00:05:35.452 node1 1048576kB 0 / 0 00:05:35.452 node1 2048kB 0 / 0 00:05:35.452 00:05:35.452 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.452 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:35.452 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:35.712 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:35.712 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:35.712 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:35.712 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:35.712 10:51:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:35.712 10:51:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:35.712 10:51:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:35.712 10:51:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:37.094 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.094 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:37.094 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:38.035 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:38.035 10:51:52 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:38.975 10:51:53 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:38.975 10:51:53 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:38.975 10:51:53 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.975 10:51:53 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:38.975 10:51:53 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:38.975 10:51:53 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:38.975 10:51:53 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.975 10:51:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:38.975 10:51:53 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:38.975 10:51:53 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:38.975 10:51:53 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:38.975 10:51:53 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:40.365 Waiting for block devices as requested 00:05:40.365 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:40.365 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:40.365 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:40.624 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:40.624 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:40.624 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:40.624 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:40.884 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:40.884 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:40.884 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:40.884 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:41.144 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:41.144 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:41.144 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:41.403 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:41.403 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:41.403 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:41.662 10:51:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:41.662 10:51:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:41.662 10:51:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:41.662 10:51:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:41.662 10:51:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:41.662 10:51:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:41.662 10:51:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:41.662 10:51:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:41.662 10:51:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:41.662 10:51:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:41.662 10:51:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:41.662 10:51:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:41.662 10:51:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:41.662 10:51:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:41.662 10:51:55 -- common/autotest_common.sh@1557 -- # continue 00:05:41.662 10:51:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:41.662 10:51:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.662 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.662 10:51:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:41.662 10:51:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.662 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.662 10:51:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.043 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.043 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:43.043 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.043 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.043 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.044 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.044 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.044 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:43.044 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:43.044 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:43.984 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.984 10:51:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:43.984 10:51:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.984 10:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.984 10:51:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:43.984 10:51:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:43.984 10:51:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.984 10:51:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:43.984 10:51:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:43.984 10:51:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:43.984 10:51:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:43.984 10:51:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:43.984 10:51:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.984 10:51:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.984 10:51:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:43.984 10:51:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:43.984 10:51:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:43.984 10:51:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:43.984 10:51:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:43.984 10:51:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:43.984 10:51:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:43.984 10:51:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:43.984 10:51:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:43.984 10:51:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:43.984 10:51:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=114295 00:05:43.984 10:51:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.984 10:51:58 -- common/autotest_common.sh@1598 -- # waitforlisten 114295 00:05:43.984 10:51:58 -- common/autotest_common.sh@829 -- # '[' -z 114295 ']' 00:05:43.984 10:51:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.984 10:51:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.984 10:51:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.984 10:51:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.984 10:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.244 [2024-07-11 10:51:58.436638] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:05:44.244 [2024-07-11 10:51:58.436728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114295 ]
00:05:44.244 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.244 [2024-07-11 10:51:58.495187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.244 [2024-07-11 10:51:58.575084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.502 10:51:58 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:44.502 10:51:58 -- common/autotest_common.sh@862 -- # return 0
00:05:44.502 10:51:58 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:05:44.502 10:51:58 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:05:44.502 10:51:58 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:05:47.806 nvme0n1
00:05:47.806 10:52:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:47.806 [2024-07-11 10:52:02.159717] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:05:47.806 [2024-07-11 10:52:02.159797] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:47.806 request:
00:05:47.806 {
00:05:47.807 "nvme_ctrlr_name": "nvme0",
00:05:47.807 "password": "test",
00:05:47.807 "method": "bdev_nvme_opal_revert",
00:05:47.807 "req_id": 1
00:05:47.807 }
00:05:47.807 Got JSON-RPC error response
00:05:47.807 response:
00:05:47.807 {
00:05:47.807 "code": -32603,
00:05:47.807 "message": "Internal error"
00:05:47.807 }
00:05:47.807 10:52:02 -- common/autotest_common.sh@1604 -- # true
00:05:47.807 10:52:02 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:05:47.807 10:52:02 -- common/autotest_common.sh@1608 -- # killprocess 114295
00:05:47.807 10:52:02 -- common/autotest_common.sh@948 -- # '[' -z 114295 ']'
00:05:47.807 10:52:02 -- common/autotest_common.sh@952 -- # kill -0 114295
00:05:47.807 10:52:02 -- common/autotest_common.sh@953 -- # uname
00:05:47.807 10:52:02 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:47.807 10:52:02 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114295
00:05:47.807 10:52:02 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:47.807 10:52:02 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:47.807 10:52:02 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114295'
00:05:47.807 killing process with pid 114295
00:05:47.807 10:52:02 -- common/autotest_common.sh@967 -- # kill 114295
00:05:47.807 10:52:02 -- common/autotest_common.sh@972 -- # wait 114295
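For readability, the opal_revert_cleanup sequence traced above condenses to the stand-alone sketch below. This is a paraphrase of the trace, not the verbatim autotest_common.sh source: the controller name is fixed to nvme0 (the real helper derives it from bdf_id), and the waitforlisten plumbing is omitted. The '|| true' mirrors the '# true' step above, since a drive with no active OPAL admin session makes bdev_nvme_opal_revert fail with the JSON-RPC "Internal error" response shown, and the test treats that as non-fatal.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" &                  # spdk_tgt_pid=114295 in this run
spdk_tgt_pid=$!
# discover NVMe controllers, as get_nvme_bdfs does above (prints 0000:88:00.0 here)
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a "$bdf"
    # revert the TPer with the test password; tolerate the -32603 error seen above
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true
done
kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid"       # triggers the EAL teardown below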
00:05:49.961 10:52:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:49.961 10:52:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:49.961 10:52:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:49.961 10:52:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:49.961 10:52:03 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:49.961 10:52:03 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:49.961 10:52:03 -- common/autotest_common.sh@10 -- # set +x
00:05:49.961 10:52:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:49.961 10:52:03 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:49.961 10:52:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:49.961 10:52:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.961 10:52:03 -- common/autotest_common.sh@10 -- # set +x
00:05:49.961 ************************************
00:05:49.961 START TEST env
00:05:49.961 ************************************
00:05:49.961 10:52:03 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:49.961 * Looking for test storage...
00:05:49.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:49.961 10:52:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:49.961 10:52:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.961 10:52:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.961 10:52:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.961 ************************************ 00:05:49.961 START TEST env_memory 00:05:49.961 ************************************ 00:05:49.961 10:52:04 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:49.961 00:05:49.961 00:05:49.961 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.961 http://cunit.sourceforge.net/ 00:05:49.961 00:05:49.961 00:05:49.961 Suite: memory 00:05:49.961 Test: alloc and free memory map ...[2024-07-11 10:52:04.082498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:49.961 passed 00:05:49.961 Test: mem map translation ...[2024-07-11 10:52:04.104164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:49.961 [2024-07-11 10:52:04.104186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:49.961 [2024-07-11 10:52:04.104242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:49.961 [2024-07-11 10:52:04.104255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.961 passed 00:05:49.961 Test: mem map registration ...[2024-07-11 10:52:04.149703] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:49.961 [2024-07-11 10:52:04.149723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:49.961 passed 00:05:49.961 Test: mem map adjacent registrations ...passed 00:05:49.961 00:05:49.961 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.961 suites 1 1 n/a 0 0 00:05:49.961 tests 4 4 4 0 0 00:05:49.961 asserts 152 152 152 0 n/a 00:05:49.961 00:05:49.961 Elapsed time = 0.149 seconds 00:05:49.961 00:05:49.961 real 0m0.158s 00:05:49.961 user 0m0.151s 00:05:49.961 sys 0m0.006s 00:05:49.961 10:52:04 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.961 10:52:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:49.961 ************************************ 00:05:49.961 END TEST env_memory 00:05:49.961 ************************************ 00:05:49.961 10:52:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:49.961 10:52:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:49.961 10:52:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
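The env_memory suite above exercises spdk_mem_map_alloc(), spdk_mem_map_set_translation() and spdk_mem_register(); the *ERROR* lines it prints (vaddr=0x4d2, len=1234, and so on) are intentional negative cases verifying that unaligned or out-of-range arguments are rejected. A minimal sketch of the same API used with valid, 2 MB-aligned arguments follows; the translation value 0xDEADBEEF is an arbitrary placeholder and the map carries no notify callbacks.

#include "spdk/env.h"

/* Sketch: a translation map at 2 MB granularity, as memory_ut tests. */
static void
mem_map_example(void)
{
    struct spdk_mem_map *map;
    uint64_t size, translation;
    int rc;

    /* 0 is returned for any region with no translation set;
     * NULL ops means no register/unregister notifications. */
    map = spdk_mem_map_alloc(0, NULL, NULL);
    if (map == NULL) {
        return;
    }

    /* vaddr and len must be 2 MB aligned -- the unaligned calls in the
     * log (vaddr=0x4d2, len=1234) are expected to fail. */
    rc = spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xDEADBEEF);
    if (rc == 0) {
        size = 0x200000;
        translation = spdk_mem_map_translate(map, 0x200000, &size);
        (void)translation; /* 0xDEADBEEF for the region set above */
    }

    spdk_mem_map_free(&map);
}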
00:05:49.961 10:52:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.961 10:52:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.961 ************************************ 00:05:49.961 START TEST env_vtophys 00:05:49.961 ************************************ 00:05:49.961 10:52:04 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:49.961 EAL: lib.eal log level changed from notice to debug 00:05:49.961 EAL: Detected lcore 0 as core 0 on socket 0 00:05:49.961 EAL: Detected lcore 1 as core 1 on socket 0 00:05:49.961 EAL: Detected lcore 2 as core 2 on socket 0 00:05:49.961 EAL: Detected lcore 3 as core 3 on socket 0 00:05:49.961 EAL: Detected lcore 4 as core 4 on socket 0 00:05:49.961 EAL: Detected lcore 5 as core 5 on socket 0 00:05:49.961 EAL: Detected lcore 6 as core 8 on socket 0 00:05:49.961 EAL: Detected lcore 7 as core 9 on socket 0 00:05:49.961 EAL: Detected lcore 8 as core 10 on socket 0 00:05:49.961 EAL: Detected lcore 9 as core 11 on socket 0 00:05:49.961 EAL: Detected lcore 10 as core 12 on socket 0 00:05:49.961 EAL: Detected lcore 11 as core 13 on socket 0 00:05:49.961 EAL: Detected lcore 12 as core 0 on socket 1 00:05:49.961 EAL: Detected lcore 13 as core 1 on socket 1 00:05:49.961 EAL: Detected lcore 14 as core 2 on socket 1 00:05:49.961 EAL: Detected lcore 15 as core 3 on socket 1 00:05:49.961 EAL: Detected lcore 16 as core 4 on socket 1 00:05:49.961 EAL: Detected lcore 17 as core 5 on socket 1 00:05:49.961 EAL: Detected lcore 18 as core 8 on socket 1 00:05:49.961 EAL: Detected lcore 19 as core 9 on socket 1 00:05:49.961 EAL: Detected lcore 20 as core 10 on socket 1 00:05:49.961 EAL: Detected lcore 21 as core 11 on socket 1 00:05:49.961 EAL: Detected lcore 22 as core 12 on socket 1 00:05:49.961 EAL: Detected lcore 23 as core 13 on socket 1 00:05:49.961 EAL: Detected lcore 24 as core 0 on socket 0 00:05:49.961 EAL: Detected lcore 25 as core 1 on socket 0 00:05:49.961 EAL: Detected lcore 26 as core 2 on socket 0 00:05:49.961 EAL: Detected lcore 27 as core 3 on socket 0 00:05:49.961 EAL: Detected lcore 28 as core 4 on socket 0 00:05:49.961 EAL: Detected lcore 29 as core 5 on socket 0 00:05:49.961 EAL: Detected lcore 30 as core 8 on socket 0 00:05:49.961 EAL: Detected lcore 31 as core 9 on socket 0 00:05:49.961 EAL: Detected lcore 32 as core 10 on socket 0 00:05:49.961 EAL: Detected lcore 33 as core 11 on socket 0 00:05:49.961 EAL: Detected lcore 34 as core 12 on socket 0 00:05:49.961 EAL: Detected lcore 35 as core 13 on socket 0 00:05:49.961 EAL: Detected lcore 36 as core 0 on socket 1 00:05:49.961 EAL: Detected lcore 37 as core 1 on socket 1 00:05:49.961 EAL: Detected lcore 38 as core 2 on socket 1 00:05:49.961 EAL: Detected lcore 39 as core 3 on socket 1 00:05:49.961 EAL: Detected lcore 40 as core 4 on socket 1 00:05:49.961 EAL: Detected lcore 41 as core 5 on socket 1 00:05:49.961 EAL: Detected lcore 42 as core 8 on socket 1 00:05:49.961 EAL: Detected lcore 43 as core 9 on socket 1 00:05:49.961 EAL: Detected lcore 44 as core 10 on socket 1 00:05:49.961 EAL: Detected lcore 45 as core 11 on socket 1 00:05:49.961 EAL: Detected lcore 46 as core 12 on socket 1 00:05:49.961 EAL: Detected lcore 47 as core 13 on socket 1 00:05:49.961 EAL: Maximum logical cores by configuration: 128 00:05:49.961 EAL: Detected CPU lcores: 48 00:05:49.961 EAL: Detected NUMA nodes: 2 00:05:49.961 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:49.961 EAL: Detected shared linkage of DPDK 
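Everything from the lcore/NUMA table above through the memseg reservations below is printed by DPDK's EAL as the vtophys binary brings the environment up. A minimal sketch of the initialization that triggers it, using the standard SPDK env API; the app name and core mask here are placeholders matching the single-core runs in this log.

#include <stdio.h>
#include "spdk/env.h"

int
main(int argc, char **argv)
{
    struct spdk_env_opts opts;

    (void)argc;
    (void)argv;

    spdk_env_opts_init(&opts);
    opts.name = "env_example";  /* placeholder app name */
    opts.core_mask = "0x1";     /* one core, as in the tests above */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "Unable to initialize SPDK env\n");
        return 1;
    }

    /* EAL has now detected lcores/NUMA nodes, selected the IOVA mode,
     * and reserved virtual address space for memseg lists, as shown
     * in the surrounding log. */
    return 0;
}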
00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:49.961 EAL: Registered [vdev] bus. 00:05:49.961 EAL: bus.vdev log level changed from disabled to notice 00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:49.961 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:49.961 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:49.961 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:49.962 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:49.962 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:49.962 EAL: No shared files mode enabled, IPC will be disabled 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Bus pci wants IOVA as 'DC' 00:05:49.962 EAL: Bus vdev wants IOVA as 'DC' 00:05:49.962 EAL: Buses did not request a specific IOVA mode. 00:05:49.962 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:49.962 EAL: Selected IOVA mode 'VA' 00:05:49.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.962 EAL: Probing VFIO support... 00:05:49.962 EAL: IOMMU type 1 (Type 1) is supported 00:05:49.962 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:49.962 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:49.962 EAL: VFIO support initialized 00:05:49.962 EAL: Ask a virtual area of 0x2e000 bytes 00:05:49.962 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:49.962 EAL: Setting up physically contiguous memory... 
00:05:49.962 EAL: Setting maximum number of open files to 524288 00:05:49.962 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:49.962 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:49.962 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:49.962 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:49.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:49.962 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:49.962 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:49.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:49.962 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:49.962 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:49.962 EAL: Hugepages will be freed exactly as allocated. 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: TSC frequency is ~2700000 KHz 00:05:49.962 EAL: Main lcore 0 is ready (tid=7ff3d31c4a00;cpuset=[0]) 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 0 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 2MB 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:49.962 EAL: Mem event callback 'spdk:(nil)' registered 00:05:49.962 00:05:49.962 00:05:49.962 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.962 http://cunit.sourceforge.net/ 00:05:49.962 00:05:49.962 00:05:49.962 Suite: components_suite 00:05:49.962 Test: vtophys_malloc_test ...passed 00:05:49.962 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 4MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 4MB 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 6MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 6MB 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 10MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 10MB 00:05:49.962 EAL: Trying to obtain current memory policy. 
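Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in this run is the registered mem event callback ('spdk:(nil)') firing as vtophys_spdk_malloc_test allocates and frees progressively larger pinned buffers. A minimal sketch of the allocation pattern that drives it, assuming an initialized env; the 4 MB size and 2 MB alignment are illustrative.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Sketch: allocate DMA-safe memory and resolve its physical address. */
static void
vtophys_example(void)
{
    size_t len = 4 * 1024 * 1024;   /* illustrative: 4 MB */
    uint64_t paddr, size;
    void *buf;

    /* Allocation from hugepages; may grow the heap ("expanded by 4MB"). */
    buf = spdk_dma_malloc(len, 0x200000 /* 2 MB align */, &paddr);
    if (buf == NULL) {
        return; /* hugepage heap could not be expanded */
    }
    printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

    /* spdk_vtophys() performs the same lookup for an arbitrary vaddr. */
    size = len;
    paddr = spdk_vtophys(buf, &size);

    /* Freeing lets the mem event callback shrink the heap again. */
    spdk_dma_free(buf);
}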
00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 18MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 18MB 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 34MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 34MB 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.962 EAL: Restoring previous memory policy: 4 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was expanded by 66MB 00:05:49.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.962 EAL: request: mp_malloc_sync 00:05:49.962 EAL: No shared files mode enabled, IPC is disabled 00:05:49.962 EAL: Heap on socket 0 was shrunk by 66MB 00:05:49.962 EAL: Trying to obtain current memory policy. 00:05:49.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.219 EAL: Restoring previous memory policy: 4 00:05:50.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.219 EAL: request: mp_malloc_sync 00:05:50.219 EAL: No shared files mode enabled, IPC is disabled 00:05:50.219 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.219 EAL: request: mp_malloc_sync 00:05:50.219 EAL: No shared files mode enabled, IPC is disabled 00:05:50.219 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.219 EAL: Trying to obtain current memory policy. 00:05:50.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.219 EAL: Restoring previous memory policy: 4 00:05:50.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.219 EAL: request: mp_malloc_sync 00:05:50.219 EAL: No shared files mode enabled, IPC is disabled 00:05:50.219 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.219 EAL: request: mp_malloc_sync 00:05:50.219 EAL: No shared files mode enabled, IPC is disabled 00:05:50.219 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.219 EAL: Trying to obtain current memory policy. 
00:05:50.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.476 EAL: Restoring previous memory policy: 4 00:05:50.476 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.476 EAL: request: mp_malloc_sync 00:05:50.476 EAL: No shared files mode enabled, IPC is disabled 00:05:50.476 EAL: Heap on socket 0 was expanded by 514MB 00:05:50.476 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.733 EAL: request: mp_malloc_sync 00:05:50.733 EAL: No shared files mode enabled, IPC is disabled 00:05:50.733 EAL: Heap on socket 0 was shrunk by 514MB 00:05:50.733 EAL: Trying to obtain current memory policy. 00:05:50.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.990 EAL: Restoring previous memory policy: 4 00:05:50.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.990 EAL: request: mp_malloc_sync 00:05:50.990 EAL: No shared files mode enabled, IPC is disabled 00:05:50.990 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.249 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.249 EAL: request: mp_malloc_sync 00:05:51.249 EAL: No shared files mode enabled, IPC is disabled 00:05:51.249 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.249 passed 00:05:51.249 00:05:51.249 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.249 suites 1 1 n/a 0 0 00:05:51.249 tests 2 2 2 0 0 00:05:51.249 asserts 497 497 497 0 n/a 00:05:51.249 00:05:51.249 Elapsed time = 1.298 seconds 00:05:51.249 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.249 EAL: request: mp_malloc_sync 00:05:51.249 EAL: No shared files mode enabled, IPC is disabled 00:05:51.249 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.249 EAL: No shared files mode enabled, IPC is disabled 00:05:51.249 EAL: No shared files mode enabled, IPC is disabled 00:05:51.249 EAL: No shared files mode enabled, IPC is disabled 00:05:51.249 00:05:51.249 real 0m1.404s 00:05:51.249 user 0m0.813s 00:05:51.249 sys 0m0.560s 00:05:51.249 10:52:05 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.249 10:52:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 ************************************ 00:05:51.249 END TEST env_vtophys 00:05:51.249 ************************************ 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:51.508 10:52:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.508 10:52:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.508 ************************************ 00:05:51.508 START TEST env_pci 00:05:51.508 ************************************ 00:05:51.508 10:52:05 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:51.508 00:05:51.508 00:05:51.508 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.508 http://cunit.sourceforge.net/ 00:05:51.508 00:05:51.508 00:05:51.508 Suite: pci 00:05:51.508 Test: pci_hook ...[2024-07-11 10:52:05.716720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 115183 has claimed it 00:05:51.508 EAL: Cannot find device (10000:00:01.0) 00:05:51.508 EAL: Failed to attach device on primary process 00:05:51.508 passed 00:05:51.508 
00:05:51.508 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.508 suites 1 1 n/a 0 0 00:05:51.508 tests 1 1 1 0 0 00:05:51.508 asserts 25 25 25 0 n/a 00:05:51.508 00:05:51.508 Elapsed time = 0.021 seconds 00:05:51.508 00:05:51.508 real 0m0.033s 00:05:51.508 user 0m0.005s 00:05:51.508 sys 0m0.027s 00:05:51.508 10:52:05 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.508 10:52:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:51.508 ************************************ 00:05:51.508 END TEST env_pci 00:05:51.508 ************************************ 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:51.508 10:52:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:51.508 10:52:05 env -- env/env.sh@15 -- # uname 00:05:51.508 10:52:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:51.508 10:52:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:51.508 10:52:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:51.508 10:52:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.508 10:52:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.508 ************************************ 00:05:51.508 START TEST env_dpdk_post_init 00:05:51.508 ************************************ 00:05:51.508 10:52:05 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.508 EAL: Detected CPU lcores: 48 00:05:51.508 EAL: Detected NUMA nodes: 2 00:05:51.508 EAL: Detected shared linkage of DPDK 00:05:51.508 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:51.508 EAL: Selected IOVA mode 'VA' 00:05:51.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.508 EAL: VFIO support initialized 00:05:51.508 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:51.508 EAL: Using IOMMU type 1 (Type 1) 00:05:51.508 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:51.508 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:51.508 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:51.508 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:51.767 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:52.707 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:55.993 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:55.993 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:55.993 Starting DPDK initialization... 00:05:55.993 Starting SPDK post initialization... 00:05:55.993 SPDK NVMe probe 00:05:55.993 Attaching to 0000:88:00.0 00:05:55.993 Attached to 0000:88:00.0 00:05:55.993 Cleaning up... 00:05:55.993 00:05:55.993 real 0m4.403s 00:05:55.993 user 0m3.267s 00:05:55.993 sys 0m0.190s 00:05:55.993 10:52:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.993 10:52:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 ************************************ 00:05:55.993 END TEST env_dpdk_post_init 00:05:55.993 ************************************ 00:05:55.993 10:52:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:55.993 10:52:10 env -- env/env.sh@26 -- # uname 00:05:55.993 10:52:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:55.993 10:52:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.993 10:52:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.993 10:52:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.993 10:52:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 ************************************ 00:05:55.993 START TEST env_mem_callbacks 00:05:55.993 ************************************ 00:05:55.993 10:52:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.993 EAL: Detected CPU lcores: 48 00:05:55.993 EAL: Detected NUMA nodes: 2 00:05:55.993 EAL: Detected shared linkage of DPDK 00:05:55.993 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:55.993 EAL: Selected IOVA mode 'VA' 00:05:55.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.993 EAL: VFIO support initialized 00:05:55.993 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:55.993 00:05:55.993 00:05:55.993 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.993 http://cunit.sourceforge.net/ 00:05:55.993 00:05:55.993 00:05:55.993 Suite: memory 00:05:55.993 Test: test ... 
00:05:55.993 register 0x200000200000 2097152 00:05:55.993 malloc 3145728 00:05:55.993 register 0x200000400000 4194304 00:05:55.993 buf 0x200000500000 len 3145728 PASSED 00:05:55.993 malloc 64 00:05:55.993 buf 0x2000004fff40 len 64 PASSED 00:05:55.993 malloc 4194304 00:05:55.993 register 0x200000800000 6291456 00:05:55.993 buf 0x200000a00000 len 4194304 PASSED 00:05:55.993 free 0x200000500000 3145728 00:05:55.993 free 0x2000004fff40 64 00:05:55.993 unregister 0x200000400000 4194304 PASSED 00:05:55.993 free 0x200000a00000 4194304 00:05:55.993 unregister 0x200000800000 6291456 PASSED 00:05:55.993 malloc 8388608 00:05:55.993 register 0x200000400000 10485760 00:05:55.993 buf 0x200000600000 len 8388608 PASSED 00:05:55.993 free 0x200000600000 8388608 00:05:55.993 unregister 0x200000400000 10485760 PASSED 00:05:55.993 passed 00:05:55.993 00:05:55.993 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.993 suites 1 1 n/a 0 0 00:05:55.993 tests 1 1 1 0 0 00:05:55.993 asserts 15 15 15 0 n/a 00:05:55.993 00:05:55.993 Elapsed time = 0.005 seconds 00:05:55.993 00:05:55.993 real 0m0.046s 00:05:55.993 user 0m0.009s 00:05:55.993 sys 0m0.037s 00:05:55.993 10:52:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.993 10:52:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 ************************************ 00:05:55.993 END TEST env_mem_callbacks 00:05:55.993 ************************************ 00:05:55.993 10:52:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:55.993 00:05:55.993 real 0m6.340s 00:05:55.993 user 0m4.372s 00:05:55.993 sys 0m1.011s 00:05:55.993 10:52:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.993 10:52:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 ************************************ 00:05:55.993 END TEST env 00:05:55.993 ************************************ 00:05:55.993 10:52:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.993 10:52:10 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.993 10:52:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.993 10:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.993 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:55.993 ************************************ 00:05:55.993 START TEST rpc 00:05:55.993 ************************************ 00:05:55.993 10:52:10 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.993 * Looking for test storage... 00:05:55.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:55.993 10:52:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=115842 00:05:55.993 10:52:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:55.993 10:52:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.993 10:52:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 115842 00:05:55.993 10:52:10 rpc -- common/autotest_common.sh@829 -- # '[' -z 115842 ']' 00:05:55.993 10:52:10 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.994 10:52:10 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.994 10:52:10 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.994 10:52:10 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.994 10:52:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.253 [2024-07-11 10:52:10.463698] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:05:56.253 [2024-07-11 10:52:10.463797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115842 ] 00:05:56.253 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.253 [2024-07-11 10:52:10.520149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.253 [2024-07-11 10:52:10.604581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:56.253 [2024-07-11 10:52:10.604638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 115842' to capture a snapshot of events at runtime. 00:05:56.253 [2024-07-11 10:52:10.604666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.253 [2024-07-11 10:52:10.604678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.253 [2024-07-11 10:52:10.604688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid115842 for offline analysis/debug. 00:05:56.253 [2024-07-11 10:52:10.604716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.512 10:52:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.512 10:52:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.512 10:52:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.512 10:52:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:56.512 10:52:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:56.512 10:52:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:56.512 10:52:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.512 10:52:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.512 10:52:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.512 ************************************ 00:05:56.512 START TEST rpc_integrity 00:05:56.512 ************************************ 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:56.512 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.512 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.771 { 00:05:56.771 "name": "Malloc0", 00:05:56.771 "aliases": [ 00:05:56.771 "bf1f0cc5-ffb7-46f0-938d-ac5b600faefa" 00:05:56.771 ], 00:05:56.771 "product_name": "Malloc disk", 00:05:56.771 "block_size": 512, 00:05:56.771 "num_blocks": 16384, 00:05:56.771 "uuid": "bf1f0cc5-ffb7-46f0-938d-ac5b600faefa", 00:05:56.771 "assigned_rate_limits": { 00:05:56.771 "rw_ios_per_sec": 0, 00:05:56.771 "rw_mbytes_per_sec": 0, 00:05:56.771 "r_mbytes_per_sec": 0, 00:05:56.771 "w_mbytes_per_sec": 0 00:05:56.771 }, 00:05:56.771 "claimed": false, 00:05:56.771 "zoned": false, 00:05:56.771 "supported_io_types": { 00:05:56.771 "read": true, 00:05:56.771 "write": true, 00:05:56.771 "unmap": true, 00:05:56.771 "flush": true, 00:05:56.771 "reset": true, 00:05:56.771 "nvme_admin": false, 00:05:56.771 "nvme_io": false, 00:05:56.771 "nvme_io_md": false, 00:05:56.771 "write_zeroes": true, 00:05:56.771 "zcopy": true, 00:05:56.771 "get_zone_info": false, 00:05:56.771 "zone_management": false, 00:05:56.771 "zone_append": false, 00:05:56.771 "compare": false, 00:05:56.771 "compare_and_write": false, 00:05:56.771 "abort": true, 00:05:56.771 "seek_hole": false, 00:05:56.771 "seek_data": false, 00:05:56.771 "copy": true, 00:05:56.771 "nvme_iov_md": false 00:05:56.771 }, 00:05:56.771 "memory_domains": [ 00:05:56.771 { 00:05:56.771 "dma_device_id": "system", 00:05:56.771 "dma_device_type": 1 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.771 "dma_device_type": 2 00:05:56.771 } 00:05:56.771 ], 00:05:56.771 "driver_specific": {} 00:05:56.771 } 00:05:56.771 ]' 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 [2024-07-11 10:52:10.978803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:56.771 [2024-07-11 10:52:10.978847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.771 [2024-07-11 10:52:10.978875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1abcaf0 00:05:56.771 [2024-07-11 10:52:10.978890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.771 
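The vbdev_passthru NOTICEs around this point show the passthru module opening its base bdev (Malloc0), claiming it for exclusive write, and then registering Passthru0 on top (the pt_bdev lines just below). A condensed sketch of that open-then-claim pattern, assuming a bdev module already registered as pt_if; the callback body and error handling are trimmed to the essentials.

#include "spdk/bdev.h"
#include "spdk/bdev_module.h"

static struct spdk_bdev_module pt_if; /* assumed registered elsewhere */

static void
base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                   void *event_ctx)
{
    /* a real module handles SPDK_BDEV_EVENT_REMOVE here */
    (void)type; (void)bdev; (void)event_ctx;
}

/* Sketch of the sequence behind "base bdev opened" / "bdev claimed". */
static int
claim_base_bdev(const char *base_name, struct spdk_bdev_desc **desc)
{
    int rc;

    rc = spdk_bdev_open_ext(base_name, true /* write */,
                            base_bdev_event_cb, NULL, desc);
    if (rc != 0) {
        return rc;
    }

    /* Exclusive-write claim: reported by bdev_get_bdevs as
     * "claimed": true, "claim_type": "exclusive_write". */
    rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(*desc),
                                     *desc, &pt_if);
    if (rc != 0) {
        spdk_bdev_close(*desc);
    }
    return rc;
}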
[2024-07-11 10:52:10.980177] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.771 [2024-07-11 10:52:10.980198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.771 Passthru0 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 10:52:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.771 { 00:05:56.771 "name": "Malloc0", 00:05:56.771 "aliases": [ 00:05:56.771 "bf1f0cc5-ffb7-46f0-938d-ac5b600faefa" 00:05:56.771 ], 00:05:56.771 "product_name": "Malloc disk", 00:05:56.771 "block_size": 512, 00:05:56.771 "num_blocks": 16384, 00:05:56.771 "uuid": "bf1f0cc5-ffb7-46f0-938d-ac5b600faefa", 00:05:56.771 "assigned_rate_limits": { 00:05:56.771 "rw_ios_per_sec": 0, 00:05:56.771 "rw_mbytes_per_sec": 0, 00:05:56.771 "r_mbytes_per_sec": 0, 00:05:56.771 "w_mbytes_per_sec": 0 00:05:56.771 }, 00:05:56.771 "claimed": true, 00:05:56.771 "claim_type": "exclusive_write", 00:05:56.771 "zoned": false, 00:05:56.771 "supported_io_types": { 00:05:56.771 "read": true, 00:05:56.771 "write": true, 00:05:56.771 "unmap": true, 00:05:56.771 "flush": true, 00:05:56.771 "reset": true, 00:05:56.771 "nvme_admin": false, 00:05:56.771 "nvme_io": false, 00:05:56.771 "nvme_io_md": false, 00:05:56.771 "write_zeroes": true, 00:05:56.771 "zcopy": true, 00:05:56.771 "get_zone_info": false, 00:05:56.771 "zone_management": false, 00:05:56.771 "zone_append": false, 00:05:56.771 "compare": false, 00:05:56.771 "compare_and_write": false, 00:05:56.771 "abort": true, 00:05:56.771 "seek_hole": false, 00:05:56.771 "seek_data": false, 00:05:56.771 "copy": true, 00:05:56.771 "nvme_iov_md": false 00:05:56.771 }, 00:05:56.771 "memory_domains": [ 00:05:56.771 { 00:05:56.771 "dma_device_id": "system", 00:05:56.771 "dma_device_type": 1 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.771 "dma_device_type": 2 00:05:56.771 } 00:05:56.771 ], 00:05:56.771 "driver_specific": {} 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "name": "Passthru0", 00:05:56.771 "aliases": [ 00:05:56.771 "2496f34c-70fd-51c3-bf67-8423723a8385" 00:05:56.771 ], 00:05:56.771 "product_name": "passthru", 00:05:56.771 "block_size": 512, 00:05:56.771 "num_blocks": 16384, 00:05:56.771 "uuid": "2496f34c-70fd-51c3-bf67-8423723a8385", 00:05:56.771 "assigned_rate_limits": { 00:05:56.771 "rw_ios_per_sec": 0, 00:05:56.771 "rw_mbytes_per_sec": 0, 00:05:56.771 "r_mbytes_per_sec": 0, 00:05:56.771 "w_mbytes_per_sec": 0 00:05:56.771 }, 00:05:56.771 "claimed": false, 00:05:56.771 "zoned": false, 00:05:56.771 "supported_io_types": { 00:05:56.771 "read": true, 00:05:56.771 "write": true, 00:05:56.771 "unmap": true, 00:05:56.771 "flush": true, 00:05:56.771 "reset": true, 00:05:56.771 "nvme_admin": false, 00:05:56.771 "nvme_io": false, 00:05:56.771 "nvme_io_md": false, 00:05:56.771 "write_zeroes": true, 00:05:56.771 "zcopy": true, 00:05:56.771 "get_zone_info": false, 00:05:56.771 "zone_management": false, 00:05:56.771 "zone_append": false, 00:05:56.771 "compare": false, 00:05:56.771 "compare_and_write": false, 00:05:56.771 "abort": true, 00:05:56.771 "seek_hole": false, 
00:05:56.771 "seek_data": false, 00:05:56.771 "copy": true, 00:05:56.771 "nvme_iov_md": false 00:05:56.771 }, 00:05:56.771 "memory_domains": [ 00:05:56.771 { 00:05:56.771 "dma_device_id": "system", 00:05:56.771 "dma_device_type": 1 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.771 "dma_device_type": 2 00:05:56.771 } 00:05:56.771 ], 00:05:56.771 "driver_specific": { 00:05:56.771 "passthru": { 00:05:56.771 "name": "Passthru0", 00:05:56.771 "base_bdev_name": "Malloc0" 00:05:56.771 } 00:05:56.771 } 00:05:56.771 } 00:05:56.771 ]' 00:05:56.771 10:52:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.771 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.771 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.771 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.771 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.771 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.772 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.772 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.772 10:52:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.772 00:05:56.772 real 0m0.215s 00:05:56.772 user 0m0.139s 00:05:56.772 sys 0m0.021s 00:05:56.772 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.772 10:52:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 ************************************ 00:05:56.772 END TEST rpc_integrity 00:05:56.772 ************************************ 00:05:56.772 10:52:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.772 10:52:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:56.772 10:52:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.772 10:52:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.772 10:52:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 ************************************ 00:05:56.772 START TEST rpc_plugins 00:05:56.772 ************************************ 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:56.772 { 00:05:56.772 "name": "Malloc1", 00:05:56.772 "aliases": [ 00:05:56.772 "e117c64f-b20f-4ec9-90d9-0b399939b069" 00:05:56.772 ], 00:05:56.772 "product_name": "Malloc disk", 00:05:56.772 "block_size": 4096, 00:05:56.772 "num_blocks": 256, 00:05:56.772 "uuid": "e117c64f-b20f-4ec9-90d9-0b399939b069", 00:05:56.772 "assigned_rate_limits": { 00:05:56.772 "rw_ios_per_sec": 0, 00:05:56.772 "rw_mbytes_per_sec": 0, 00:05:56.772 "r_mbytes_per_sec": 0, 00:05:56.772 "w_mbytes_per_sec": 0 00:05:56.772 }, 00:05:56.772 "claimed": false, 00:05:56.772 "zoned": false, 00:05:56.772 "supported_io_types": { 00:05:56.772 "read": true, 00:05:56.772 "write": true, 00:05:56.772 "unmap": true, 00:05:56.772 "flush": true, 00:05:56.772 "reset": true, 00:05:56.772 "nvme_admin": false, 00:05:56.772 "nvme_io": false, 00:05:56.772 "nvme_io_md": false, 00:05:56.772 "write_zeroes": true, 00:05:56.772 "zcopy": true, 00:05:56.772 "get_zone_info": false, 00:05:56.772 "zone_management": false, 00:05:56.772 "zone_append": false, 00:05:56.772 "compare": false, 00:05:56.772 "compare_and_write": false, 00:05:56.772 "abort": true, 00:05:56.772 "seek_hole": false, 00:05:56.772 "seek_data": false, 00:05:56.772 "copy": true, 00:05:56.772 "nvme_iov_md": false 00:05:56.772 }, 00:05:56.772 "memory_domains": [ 00:05:56.772 { 00:05:56.772 "dma_device_id": "system", 00:05:56.772 "dma_device_type": 1 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.772 "dma_device_type": 2 00:05:56.772 } 00:05:56.772 ], 00:05:56.772 "driver_specific": {} 00:05:56.772 } 00:05:56.772 ]' 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:56.772 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.772 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.030 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.030 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:57.030 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:57.030 10:52:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:57.030 00:05:57.030 real 0m0.104s 00:05:57.030 user 0m0.069s 00:05:57.030 sys 0m0.008s 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.030 10:52:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:57.030 ************************************ 00:05:57.030 END TEST rpc_plugins 00:05:57.030 ************************************ 00:05:57.030 10:52:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.030 10:52:11 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:57.030 10:52:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.030 10:52:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.030 10:52:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.030 ************************************ 00:05:57.030 START TEST rpc_trace_cmd_test 00:05:57.030 ************************************ 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:57.030 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid115842", 00:05:57.030 "tpoint_group_mask": "0x8", 00:05:57.030 "iscsi_conn": { 00:05:57.030 "mask": "0x2", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "scsi": { 00:05:57.030 "mask": "0x4", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "bdev": { 00:05:57.030 "mask": "0x8", 00:05:57.030 "tpoint_mask": "0xffffffffffffffff" 00:05:57.030 }, 00:05:57.030 "nvmf_rdma": { 00:05:57.030 "mask": "0x10", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "nvmf_tcp": { 00:05:57.030 "mask": "0x20", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "ftl": { 00:05:57.030 "mask": "0x40", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "blobfs": { 00:05:57.030 "mask": "0x80", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "dsa": { 00:05:57.030 "mask": "0x200", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "thread": { 00:05:57.030 "mask": "0x400", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "nvme_pcie": { 00:05:57.030 "mask": "0x800", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "iaa": { 00:05:57.030 "mask": "0x1000", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "nvme_tcp": { 00:05:57.030 "mask": "0x2000", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "bdev_nvme": { 00:05:57.030 "mask": "0x4000", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 }, 00:05:57.030 "sock": { 00:05:57.030 "mask": "0x8000", 00:05:57.030 "tpoint_mask": "0x0" 00:05:57.030 } 00:05:57.030 }' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:57.030 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:57.031 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:57.031 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:57.289 10:52:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
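The rpc_trace_cmd_test assertions above reduce to a few jq probes against the trace_get_info payload. A minimal standalone sketch of the same checks (an illustration, not captured output; it assumes a running target, the stock scripts/rpc.py client, and jq on PATH):

  info=$(scripts/rpc.py trace_get_info)                        # same RPC the test calls
  [ "$(echo "$info" | jq length)" -gt 2 ]                      # more entries than just the two metadata keys
  [ "$(echo "$info" | jq 'has("tpoint_group_mask")')" = true ]
  [ "$(echo "$info" | jq 'has("tpoint_shm_path")')" = true ]
  [ "$(echo "$info" | jq -r .bdev.tpoint_mask)" != 0x0 ]       # bdev tracepoints enabled by the group mask

Only the bdev group reports a non-zero tpoint_mask because, as the dump above shows, the target was started with tpoint_group_mask 0x8.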
00:05:57.289 00:05:57.289 real 0m0.181s 00:05:57.289 user 0m0.159s 00:05:57.289 sys 0m0.013s 00:05:57.289 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 ************************************ 00:05:57.289 END TEST rpc_trace_cmd_test 00:05:57.289 ************************************ 00:05:57.289 10:52:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.289 10:52:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:57.289 10:52:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:57.289 10:52:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:57.289 10:52:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.289 10:52:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.289 10:52:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 ************************************ 00:05:57.289 START TEST rpc_daemon_integrity 00:05:57.289 ************************************ 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.289 { 00:05:57.289 "name": "Malloc2", 00:05:57.289 "aliases": [ 00:05:57.289 "15851665-ad42-49dd-a326-9e4133750c69" 00:05:57.289 ], 00:05:57.289 "product_name": "Malloc disk", 00:05:57.289 "block_size": 512, 00:05:57.289 "num_blocks": 16384, 00:05:57.289 "uuid": "15851665-ad42-49dd-a326-9e4133750c69", 00:05:57.289 "assigned_rate_limits": { 00:05:57.289 "rw_ios_per_sec": 0, 00:05:57.289 "rw_mbytes_per_sec": 0, 00:05:57.289 "r_mbytes_per_sec": 0, 00:05:57.289 "w_mbytes_per_sec": 0 00:05:57.289 }, 00:05:57.289 "claimed": false, 00:05:57.289 "zoned": false, 00:05:57.289 "supported_io_types": { 00:05:57.289 "read": true, 00:05:57.289 "write": true, 00:05:57.289 "unmap": true, 00:05:57.289 "flush": true, 00:05:57.289 "reset": true, 00:05:57.289 "nvme_admin": false, 00:05:57.289 "nvme_io": false, 
00:05:57.289 "nvme_io_md": false, 00:05:57.289 "write_zeroes": true, 00:05:57.289 "zcopy": true, 00:05:57.289 "get_zone_info": false, 00:05:57.289 "zone_management": false, 00:05:57.289 "zone_append": false, 00:05:57.289 "compare": false, 00:05:57.289 "compare_and_write": false, 00:05:57.289 "abort": true, 00:05:57.289 "seek_hole": false, 00:05:57.289 "seek_data": false, 00:05:57.289 "copy": true, 00:05:57.289 "nvme_iov_md": false 00:05:57.289 }, 00:05:57.289 "memory_domains": [ 00:05:57.289 { 00:05:57.289 "dma_device_id": "system", 00:05:57.289 "dma_device_type": 1 00:05:57.289 }, 00:05:57.289 { 00:05:57.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.289 "dma_device_type": 2 00:05:57.289 } 00:05:57.289 ], 00:05:57.289 "driver_specific": {} 00:05:57.289 } 00:05:57.289 ]' 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 [2024-07-11 10:52:11.612805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.289 [2024-07-11 10:52:11.612848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.289 [2024-07-11 10:52:11.612869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x190c290 00:05:57.289 [2024-07-11 10:52:11.612883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.289 [2024-07-11 10:52:11.614025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.289 [2024-07-11 10:52:11.614064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.289 Passthru0 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.289 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.290 { 00:05:57.290 "name": "Malloc2", 00:05:57.290 "aliases": [ 00:05:57.290 "15851665-ad42-49dd-a326-9e4133750c69" 00:05:57.290 ], 00:05:57.290 "product_name": "Malloc disk", 00:05:57.290 "block_size": 512, 00:05:57.290 "num_blocks": 16384, 00:05:57.290 "uuid": "15851665-ad42-49dd-a326-9e4133750c69", 00:05:57.290 "assigned_rate_limits": { 00:05:57.290 "rw_ios_per_sec": 0, 00:05:57.290 "rw_mbytes_per_sec": 0, 00:05:57.290 "r_mbytes_per_sec": 0, 00:05:57.290 "w_mbytes_per_sec": 0 00:05:57.290 }, 00:05:57.290 "claimed": true, 00:05:57.290 "claim_type": "exclusive_write", 00:05:57.290 "zoned": false, 00:05:57.290 "supported_io_types": { 00:05:57.290 "read": true, 00:05:57.290 "write": true, 00:05:57.290 "unmap": true, 00:05:57.290 "flush": true, 00:05:57.290 "reset": true, 00:05:57.290 "nvme_admin": false, 00:05:57.290 "nvme_io": false, 00:05:57.290 "nvme_io_md": false, 00:05:57.290 "write_zeroes": true, 00:05:57.290 "zcopy": true, 00:05:57.290 "get_zone_info": 
false, 00:05:57.290 "zone_management": false, 00:05:57.290 "zone_append": false, 00:05:57.290 "compare": false, 00:05:57.290 "compare_and_write": false, 00:05:57.290 "abort": true, 00:05:57.290 "seek_hole": false, 00:05:57.290 "seek_data": false, 00:05:57.290 "copy": true, 00:05:57.290 "nvme_iov_md": false 00:05:57.290 }, 00:05:57.290 "memory_domains": [ 00:05:57.290 { 00:05:57.290 "dma_device_id": "system", 00:05:57.290 "dma_device_type": 1 00:05:57.290 }, 00:05:57.290 { 00:05:57.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.290 "dma_device_type": 2 00:05:57.290 } 00:05:57.290 ], 00:05:57.290 "driver_specific": {} 00:05:57.290 }, 00:05:57.290 { 00:05:57.290 "name": "Passthru0", 00:05:57.290 "aliases": [ 00:05:57.290 "27d74108-9844-522a-910f-0f7bcb501905" 00:05:57.290 ], 00:05:57.290 "product_name": "passthru", 00:05:57.290 "block_size": 512, 00:05:57.290 "num_blocks": 16384, 00:05:57.290 "uuid": "27d74108-9844-522a-910f-0f7bcb501905", 00:05:57.290 "assigned_rate_limits": { 00:05:57.290 "rw_ios_per_sec": 0, 00:05:57.290 "rw_mbytes_per_sec": 0, 00:05:57.290 "r_mbytes_per_sec": 0, 00:05:57.290 "w_mbytes_per_sec": 0 00:05:57.290 }, 00:05:57.290 "claimed": false, 00:05:57.290 "zoned": false, 00:05:57.290 "supported_io_types": { 00:05:57.290 "read": true, 00:05:57.290 "write": true, 00:05:57.290 "unmap": true, 00:05:57.290 "flush": true, 00:05:57.290 "reset": true, 00:05:57.290 "nvme_admin": false, 00:05:57.290 "nvme_io": false, 00:05:57.290 "nvme_io_md": false, 00:05:57.290 "write_zeroes": true, 00:05:57.290 "zcopy": true, 00:05:57.290 "get_zone_info": false, 00:05:57.290 "zone_management": false, 00:05:57.290 "zone_append": false, 00:05:57.290 "compare": false, 00:05:57.290 "compare_and_write": false, 00:05:57.290 "abort": true, 00:05:57.290 "seek_hole": false, 00:05:57.290 "seek_data": false, 00:05:57.290 "copy": true, 00:05:57.290 "nvme_iov_md": false 00:05:57.290 }, 00:05:57.290 "memory_domains": [ 00:05:57.290 { 00:05:57.290 "dma_device_id": "system", 00:05:57.290 "dma_device_type": 1 00:05:57.290 }, 00:05:57.290 { 00:05:57.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.290 "dma_device_type": 2 00:05:57.290 } 00:05:57.290 ], 00:05:57.290 "driver_specific": { 00:05:57.290 "passthru": { 00:05:57.290 "name": "Passthru0", 00:05:57.290 "base_bdev_name": "Malloc2" 00:05:57.290 } 00:05:57.290 } 00:05:57.290 } 00:05:57.290 ]' 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.290 10:52:11 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.290 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.548 10:52:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.548 00:05:57.548 real 0m0.216s 00:05:57.548 user 0m0.141s 00:05:57.548 sys 0m0.017s 00:05:57.548 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.548 10:52:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.548 ************************************ 00:05:57.548 END TEST rpc_daemon_integrity 00:05:57.548 ************************************ 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.548 10:52:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.548 10:52:11 rpc -- rpc/rpc.sh@84 -- # killprocess 115842 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 115842 ']' 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@952 -- # kill -0 115842 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@953 -- # uname 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115842 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115842' 00:05:57.548 killing process with pid 115842 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@967 -- # kill 115842 00:05:57.548 10:52:11 rpc -- common/autotest_common.sh@972 -- # wait 115842 00:05:57.808 00:05:57.808 real 0m1.797s 00:05:57.808 user 0m2.231s 00:05:57.808 sys 0m0.573s 00:05:57.808 10:52:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.808 10:52:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.808 ************************************ 00:05:57.808 END TEST rpc 00:05:57.808 ************************************ 00:05:57.808 10:52:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.808 10:52:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:57.808 10:52:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.808 10:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.808 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:57.808 ************************************ 00:05:57.808 START TEST skip_rpc 00:05:57.808 ************************************ 00:05:57.808 10:52:12 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.068 * Looking for test storage... 
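The teardown that closes TEST rpc above follows a recurring killprocess pattern: probe the PID with kill -0, confirm via ps that it is the SPDK reactor rather than a sudo wrapper, then kill and reap it. A condensed sketch of that pattern as it appears in the trace (not the verbatim autotest_common.sh helper; the Linux/FreeBSD uname branch is omitted):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                       # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 for an SPDK target
    [ "$name" != sudo ] || return 1                 # never kill a bare sudo wrapper this way
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # reap so the exit status is collected
  }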
00:05:58.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:58.068 10:52:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.068 10:52:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.068 10:52:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.068 10:52:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.068 10:52:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.068 10:52:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.068 ************************************ 00:05:58.068 START TEST skip_rpc 00:05:58.068 ************************************ 00:05:58.068 10:52:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:58.068 10:52:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=116278 00:05:58.068 10:52:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.068 10:52:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.068 10:52:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.068 [2024-07-11 10:52:12.336789] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:05:58.068 [2024-07-11 10:52:12.336854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116278 ] 00:05:58.068 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.068 [2024-07-11 10:52:12.392184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.068 [2024-07-11 10:52:12.480179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 116278 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 116278 ']' 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 116278 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116278 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116278' 00:06:03.351 killing process with pid 116278 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 116278 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 116278 00:06:03.351 00:06:03.351 real 0m5.412s 00:06:03.351 user 0m5.118s 00:06:03.351 sys 0m0.303s 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.351 10:52:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 ************************************ 00:06:03.351 END TEST skip_rpc 00:06:03.351 ************************************ 00:06:03.351 10:52:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.351 10:52:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:03.351 10:52:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.351 10:52:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.351 10:52:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 ************************************ 00:06:03.351 START TEST skip_rpc_with_json 00:06:03.351 ************************************ 00:06:03.351 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:03.351 10:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:03.351 10:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=116963 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 116963 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 116963 ']' 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
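The "Waiting for process to start up..." message above comes from the waitforlisten helper. Its internals are not shown in this trace, but the behavior amounts to polling the RPC socket until the target answers; a simplified stand-in (an assumption about the helper, using only stock rpc.py flags) might look like:

  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local i
    for i in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || return 1        # target died during startup
      scripts/rpc.py -t 1 -s "$sock" rpc_get_methods \
        >/dev/null 2>&1 && return 0                 # RPC server is up
      sleep 0.1
    done
    return 1                                        # timed out
  }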
00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.352 10:52:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.611 [2024-07-11 10:52:17.800140] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:06:03.611 [2024-07-11 10:52:17.800224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116963 ] 00:06:03.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.611 [2024-07-11 10:52:17.857241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.611 [2024-07-11 10:52:17.935523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.870 [2024-07-11 10:52:18.180015] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:03.870 request: 00:06:03.870 { 00:06:03.870 "trtype": "tcp", 00:06:03.870 "method": "nvmf_get_transports", 00:06:03.870 "req_id": 1 00:06:03.870 } 00:06:03.870 Got JSON-RPC error response 00:06:03.870 response: 00:06:03.870 { 00:06:03.870 "code": -19, 00:06:03.870 "message": "No such device" 00:06:03.870 } 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.870 [2024-07-11 10:52:18.188153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.870 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.129 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.129 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.129 { 00:06:04.129 "subsystems": [ 00:06:04.129 { 00:06:04.129 "subsystem": "vfio_user_target", 00:06:04.129 "config": null 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "subsystem": "keyring", 00:06:04.129 "config": [] 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "subsystem": "iobuf", 00:06:04.129 "config": [ 00:06:04.129 { 00:06:04.129 "method": "iobuf_set_options", 00:06:04.129 "params": { 00:06:04.129 "small_pool_count": 8192, 00:06:04.129 "large_pool_count": 1024, 00:06:04.129 "small_bufsize": 8192, 00:06:04.129 "large_bufsize": 
135168 00:06:04.129 } 00:06:04.129 } 00:06:04.129 ] 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "subsystem": "sock", 00:06:04.129 "config": [ 00:06:04.129 { 00:06:04.129 "method": "sock_set_default_impl", 00:06:04.129 "params": { 00:06:04.129 "impl_name": "posix" 00:06:04.129 } 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "method": "sock_impl_set_options", 00:06:04.129 "params": { 00:06:04.129 "impl_name": "ssl", 00:06:04.129 "recv_buf_size": 4096, 00:06:04.129 "send_buf_size": 4096, 00:06:04.129 "enable_recv_pipe": true, 00:06:04.129 "enable_quickack": false, 00:06:04.129 "enable_placement_id": 0, 00:06:04.129 "enable_zerocopy_send_server": true, 00:06:04.129 "enable_zerocopy_send_client": false, 00:06:04.129 "zerocopy_threshold": 0, 00:06:04.129 "tls_version": 0, 00:06:04.129 "enable_ktls": false 00:06:04.129 } 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "method": "sock_impl_set_options", 00:06:04.129 "params": { 00:06:04.129 "impl_name": "posix", 00:06:04.129 "recv_buf_size": 2097152, 00:06:04.129 "send_buf_size": 2097152, 00:06:04.129 "enable_recv_pipe": true, 00:06:04.129 "enable_quickack": false, 00:06:04.129 "enable_placement_id": 0, 00:06:04.129 "enable_zerocopy_send_server": true, 00:06:04.129 "enable_zerocopy_send_client": false, 00:06:04.129 "zerocopy_threshold": 0, 00:06:04.129 "tls_version": 0, 00:06:04.129 "enable_ktls": false 00:06:04.129 } 00:06:04.129 } 00:06:04.129 ] 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "subsystem": "vmd", 00:06:04.129 "config": [] 00:06:04.129 }, 00:06:04.129 { 00:06:04.129 "subsystem": "accel", 00:06:04.129 "config": [ 00:06:04.129 { 00:06:04.129 "method": "accel_set_options", 00:06:04.129 "params": { 00:06:04.129 "small_cache_size": 128, 00:06:04.129 "large_cache_size": 16, 00:06:04.129 "task_count": 2048, 00:06:04.129 "sequence_count": 2048, 00:06:04.129 "buf_count": 2048 00:06:04.129 } 00:06:04.129 } 00:06:04.130 ] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "bdev", 00:06:04.130 "config": [ 00:06:04.130 { 00:06:04.130 "method": "bdev_set_options", 00:06:04.130 "params": { 00:06:04.130 "bdev_io_pool_size": 65535, 00:06:04.130 "bdev_io_cache_size": 256, 00:06:04.130 "bdev_auto_examine": true, 00:06:04.130 "iobuf_small_cache_size": 128, 00:06:04.130 "iobuf_large_cache_size": 16 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "bdev_raid_set_options", 00:06:04.130 "params": { 00:06:04.130 "process_window_size_kb": 1024 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "bdev_iscsi_set_options", 00:06:04.130 "params": { 00:06:04.130 "timeout_sec": 30 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "bdev_nvme_set_options", 00:06:04.130 "params": { 00:06:04.130 "action_on_timeout": "none", 00:06:04.130 "timeout_us": 0, 00:06:04.130 "timeout_admin_us": 0, 00:06:04.130 "keep_alive_timeout_ms": 10000, 00:06:04.130 "arbitration_burst": 0, 00:06:04.130 "low_priority_weight": 0, 00:06:04.130 "medium_priority_weight": 0, 00:06:04.130 "high_priority_weight": 0, 00:06:04.130 "nvme_adminq_poll_period_us": 10000, 00:06:04.130 "nvme_ioq_poll_period_us": 0, 00:06:04.130 "io_queue_requests": 0, 00:06:04.130 "delay_cmd_submit": true, 00:06:04.130 "transport_retry_count": 4, 00:06:04.130 "bdev_retry_count": 3, 00:06:04.130 "transport_ack_timeout": 0, 00:06:04.130 "ctrlr_loss_timeout_sec": 0, 00:06:04.130 "reconnect_delay_sec": 0, 00:06:04.130 "fast_io_fail_timeout_sec": 0, 00:06:04.130 "disable_auto_failback": false, 00:06:04.130 "generate_uuids": false, 00:06:04.130 "transport_tos": 0, 
00:06:04.130 "nvme_error_stat": false, 00:06:04.130 "rdma_srq_size": 0, 00:06:04.130 "io_path_stat": false, 00:06:04.130 "allow_accel_sequence": false, 00:06:04.130 "rdma_max_cq_size": 0, 00:06:04.130 "rdma_cm_event_timeout_ms": 0, 00:06:04.130 "dhchap_digests": [ 00:06:04.130 "sha256", 00:06:04.130 "sha384", 00:06:04.130 "sha512" 00:06:04.130 ], 00:06:04.130 "dhchap_dhgroups": [ 00:06:04.130 "null", 00:06:04.130 "ffdhe2048", 00:06:04.130 "ffdhe3072", 00:06:04.130 "ffdhe4096", 00:06:04.130 "ffdhe6144", 00:06:04.130 "ffdhe8192" 00:06:04.130 ] 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "bdev_nvme_set_hotplug", 00:06:04.130 "params": { 00:06:04.130 "period_us": 100000, 00:06:04.130 "enable": false 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "bdev_wait_for_examine" 00:06:04.130 } 00:06:04.130 ] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "scsi", 00:06:04.130 "config": null 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "scheduler", 00:06:04.130 "config": [ 00:06:04.130 { 00:06:04.130 "method": "framework_set_scheduler", 00:06:04.130 "params": { 00:06:04.130 "name": "static" 00:06:04.130 } 00:06:04.130 } 00:06:04.130 ] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "vhost_scsi", 00:06:04.130 "config": [] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "vhost_blk", 00:06:04.130 "config": [] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "ublk", 00:06:04.130 "config": [] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "nbd", 00:06:04.130 "config": [] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "nvmf", 00:06:04.130 "config": [ 00:06:04.130 { 00:06:04.130 "method": "nvmf_set_config", 00:06:04.130 "params": { 00:06:04.130 "discovery_filter": "match_any", 00:06:04.130 "admin_cmd_passthru": { 00:06:04.130 "identify_ctrlr": false 00:06:04.130 } 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "nvmf_set_max_subsystems", 00:06:04.130 "params": { 00:06:04.130 "max_subsystems": 1024 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "nvmf_set_crdt", 00:06:04.130 "params": { 00:06:04.130 "crdt1": 0, 00:06:04.130 "crdt2": 0, 00:06:04.130 "crdt3": 0 00:06:04.130 } 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "method": "nvmf_create_transport", 00:06:04.130 "params": { 00:06:04.130 "trtype": "TCP", 00:06:04.130 "max_queue_depth": 128, 00:06:04.130 "max_io_qpairs_per_ctrlr": 127, 00:06:04.130 "in_capsule_data_size": 4096, 00:06:04.130 "max_io_size": 131072, 00:06:04.130 "io_unit_size": 131072, 00:06:04.130 "max_aq_depth": 128, 00:06:04.130 "num_shared_buffers": 511, 00:06:04.130 "buf_cache_size": 4294967295, 00:06:04.130 "dif_insert_or_strip": false, 00:06:04.130 "zcopy": false, 00:06:04.130 "c2h_success": true, 00:06:04.130 "sock_priority": 0, 00:06:04.130 "abort_timeout_sec": 1, 00:06:04.130 "ack_timeout": 0, 00:06:04.130 "data_wr_pool_size": 0 00:06:04.130 } 00:06:04.130 } 00:06:04.130 ] 00:06:04.130 }, 00:06:04.130 { 00:06:04.130 "subsystem": "iscsi", 00:06:04.130 "config": [ 00:06:04.130 { 00:06:04.130 "method": "iscsi_set_options", 00:06:04.130 "params": { 00:06:04.130 "node_base": "iqn.2016-06.io.spdk", 00:06:04.130 "max_sessions": 128, 00:06:04.130 "max_connections_per_session": 2, 00:06:04.130 "max_queue_depth": 64, 00:06:04.130 "default_time2wait": 2, 00:06:04.130 "default_time2retain": 20, 00:06:04.130 "first_burst_length": 8192, 00:06:04.130 "immediate_data": true, 00:06:04.130 "allow_duplicated_isid": false, 00:06:04.130 
"error_recovery_level": 0, 00:06:04.130 "nop_timeout": 60, 00:06:04.130 "nop_in_interval": 30, 00:06:04.130 "disable_chap": false, 00:06:04.130 "require_chap": false, 00:06:04.130 "mutual_chap": false, 00:06:04.130 "chap_group": 0, 00:06:04.130 "max_large_datain_per_connection": 64, 00:06:04.130 "max_r2t_per_connection": 4, 00:06:04.130 "pdu_pool_size": 36864, 00:06:04.130 "immediate_data_pool_size": 16384, 00:06:04.130 "data_out_pool_size": 2048 00:06:04.130 } 00:06:04.130 } 00:06:04.130 ] 00:06:04.130 } 00:06:04.130 ] 00:06:04.130 } 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 116963 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 116963 ']' 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 116963 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116963 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116963' 00:06:04.130 killing process with pid 116963 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 116963 00:06:04.130 10:52:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 116963 00:06:04.391 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=117105 00:06:04.391 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.391 10:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 117105 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 117105 ']' 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 117105 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117105 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117105' 00:06:09.659 killing process with pid 117105 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 117105 00:06:09.659 10:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 117105 00:06:09.917 10:52:24 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:09.917 00:06:09.917 real 0m6.434s 00:06:09.917 user 0m6.059s 00:06:09.917 sys 0m0.651s 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 ************************************ 00:06:09.917 END TEST skip_rpc_with_json 00:06:09.917 ************************************ 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.917 10:52:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 ************************************ 00:06:09.917 START TEST skip_rpc_with_delay 00:06:09.917 ************************************ 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.917 [2024-07-11 10:52:24.285819] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
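The startup error above is exactly what skip_rpc_with_delay asserts: the test deliberately combines --no-rpc-server with --wait-for-rpc, and spdk_tgt must refuse, since waiting for RPCs only makes sense when an RPC server will come up. For contrast, the supported flow pairs --wait-for-rpc with framework_start_init (a sketch, assuming the stock spdk_tgt and rpc.py, paths relative to the SPDK tree):

  build/bin/spdk_tgt -m 0x1 --wait-for-rpc &    # RPC server starts, subsystem init is deferred
  pid=$!
  # ... wait for /var/tmp/spdk.sock, then issue any pre-init RPCs here ...
  scripts/rpc.py framework_start_init           # subsystems initialize only now
  scripts/rpc.py spdk_get_version               # ordinary RPCs succeed once init completes
  kill "$pid" && wait "$pid"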
00:06:09.917 [2024-07-11 10:52:24.285935] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.917 00:06:09.917 real 0m0.068s 00:06:09.917 user 0m0.043s 00:06:09.917 sys 0m0.025s 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.917 10:52:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:09.917 ************************************ 00:06:09.917 END TEST skip_rpc_with_delay 00:06:09.917 ************************************ 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.917 10:52:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:09.917 10:52:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:09.917 10:52:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.917 10:52:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.176 ************************************ 00:06:10.176 START TEST exit_on_failed_rpc_init 00:06:10.176 ************************************ 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=117817 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 117817 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 117817 ']' 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.176 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.176 [2024-07-11 10:52:24.401371] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:06:10.176 [2024-07-11 10:52:24.401467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117817 ] 00:06:10.176 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.176 [2024-07-11 10:52:24.461019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.176 [2024-07-11 10:52:24.549097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.435 10:52:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.435 [2024-07-11 10:52:24.836257] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:06:10.435 [2024-07-11 10:52:24.836351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117828 ]
00:06:10.695 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.695 [2024-07-11 10:52:24.894867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.695 [2024-07-11 10:52:24.983231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:10.695 [2024-07-11 10:52:24.983356] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:10.695 [2024-07-11 10:52:24.983375] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:10.695 [2024-07-11 10:52:24.983386] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 117817
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 117817 ']'
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 117817
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117817
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:10.695 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117817'
killing process with pid 117817
10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 117817
10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 117817
00:06:11.262
00:06:11.262 real 0m1.122s
00:06:11.262 user 0m1.228s
00:06:11.262 sys 0m0.426s
00:06:11.262 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:11.262 10:52:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:11.262 ************************************
00:06:11.262 END TEST exit_on_failed_rpc_init
00:06:11.262 ************************************
00:06:11.262 10:52:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:11.262 10:52:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:06:11.262
00:06:11.262 real 0m13.291s
00:06:11.262 user 0m12.554s
00:06:11.262 sys 0m1.570s
00:06:11.262 10:52:25 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:11.262 10:52:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.262 ************************************
00:06:11.262 END TEST skip_rpc
00:06:11.262 ************************************
00:06:11.262 10:52:25 -- common/autotest_common.sh@1142 -- # return 0
00:06:11.262 10:52:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:11.262 10:52:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:11.262 10:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:11.262 10:52:25 -- common/autotest_common.sh@10 -- # set +x
00:06:11.262 ************************************
00:06:11.262 START TEST rpc_client
00:06:11.262 ************************************
00:06:11.262 10:52:25 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:11.263 * Looking for test storage...
00:06:11.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:06:11.263 10:52:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:06:11.263 OK
00:06:11.263 10:52:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:11.263
00:06:11.263 real 0m0.071s
00:06:11.263 user 0m0.033s
00:06:11.263 sys 0m0.043s
00:06:11.263 10:52:25 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:11.263 10:52:25 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:11.263 ************************************
00:06:11.263 END TEST rpc_client
00:06:11.263 ************************************
00:06:11.263 10:52:25 -- common/autotest_common.sh@1142 -- # return 0
00:06:11.263 10:52:25 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:11.263 10:52:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:11.263 10:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:11.263 10:52:25 -- common/autotest_common.sh@10 -- # set +x
00:06:11.263 ************************************
00:06:11.263 START TEST json_config
00:06:11.263 ************************************
00:06:11.263 10:52:25 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:11.523 10:52:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:11.523 10:52:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:11.523 10:52:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:11.523 10:52:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.523 10:52:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.523 10:52:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.523 10:52:25 json_config -- paths/export.sh@5 -- # export PATH
00:06:11.523 10:52:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@47 -- # : 0
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:11.523 10:52:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:11.523 10:52:25 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
10:52:25 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
10:52:25 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
10:52:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
10:52:25 json_config -- common/autotest_common.sh@10 -- # set +x
10:52:25 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
10:52:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
10:52:25 json_config -- common/autotest_common.sh@10 -- # set +x
10:52:25 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
10:52:25 json_config -- json_config/common.sh@9 -- # local app=target
10:52:25 json_config -- json_config/common.sh@10 -- # shift
10:52:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
10:52:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
10:52:25 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:11.523 10:52:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:11.523 10:52:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:11.523 10:52:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=118070
00:06:11.523 10:52:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:06:11.523 10:52:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:11.523 Waiting for target to run...
00:06:11.523 10:52:25 json_config -- json_config/common.sh@25 -- # waitforlisten 118070 /var/tmp/spdk_tgt.sock
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 118070 ']'
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:11.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:11.524 10:52:25 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:11.524 [2024-07-11 10:52:25.780498] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
[2024-07-11 10:52:25.780595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118070 ]
00:06:11.524 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.783 [2024-07-11 10:52:26.111136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.783 [2024-07-11 10:52:26.166482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@862 -- # return 0
00:06:12.350 10:52:26 json_config -- json_config/common.sh@26 -- # echo ''
00:06:12.350
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:12.350 10:52:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:06:12.350 10:52:26 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:06:12.350 10:52:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:06:15.636 10:52:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:15.636 10:52:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:06:15.636 10:52:29 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:06:15.636 10:52:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@48 -- # local get_types
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:06:15.893 10:52:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:15.893 10:52:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@55 -- # return 0
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:06:15.893 10:52:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:15.893 10:52:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:06:15.893 10:52:30 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:15.893 10:52:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:16.151 MallocForNvmf0
00:06:16.151 10:52:30 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:16.151 10:52:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:16.414 MallocForNvmf1
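(The json_config run above builds its NVMe-oF target configuration entirely over JSON-RPC: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener. Condensed from the rpc.py invocations traced above and below, with the workspace prefix shortened for readability; this is a sketch of the sequence, not the test's literal code:
  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
Replaying these by hand against a target started with --wait-for-rpc should reproduce the config that save_config later dumps.)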
00:06:16.414 10:52:30 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:06:16.414 10:52:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:06:16.674 [2024-07-11 10:52:30.829341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:16.674 10:52:30 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:16.674 10:52:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:16.674 10:52:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:16.674 10:52:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:16.931 10:52:31 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:16.931 10:52:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:17.189 10:52:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:17.189 10:52:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:17.447 [2024-07-11 10:52:31.784254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:06:17.447 10:52:31 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:06:17.447 10:52:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:17.447 10:52:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:17.447 10:52:31 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:06:17.447 10:52:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:17.447 10:52:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:17.447 10:52:31 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:06:17.447 10:52:31 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:17.447 10:52:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:17.705 MallocBdevForConfigChangeCheck
00:06:17.705 10:52:32 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:06:17.705 10:52:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:17.705 10:52:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:17.705 10:52:32 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:06:17.705 10:52:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:18.272 10:52:32 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:06:18.272 INFO: shutting down applications...
00:06:18.272 10:52:32 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:06:18.272 10:52:32 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:06:18.272 10:52:32 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:06:18.272 10:52:32 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:06:19.665 Calling clear_iscsi_subsystem
00:06:19.665 Calling clear_nvmf_subsystem
00:06:19.665 Calling clear_nbd_subsystem
00:06:19.665 Calling clear_ublk_subsystem
00:06:19.665 Calling clear_vhost_blk_subsystem
00:06:19.665 Calling clear_vhost_scsi_subsystem
00:06:19.665 Calling clear_bdev_subsystem
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@343 -- # count=100
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:06:19.665 10:52:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:06:20.229 10:52:34 json_config -- json_config/json_config.sh@345 -- # break
00:06:20.229 10:52:34 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:06:20.229 10:52:34 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:06:20.229 10:52:34 json_config -- json_config/common.sh@31 -- # local app=target
00:06:20.229 10:52:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:20.229 10:52:34 json_config -- json_config/common.sh@35 -- # [[ -n 118070 ]]
00:06:20.229 10:52:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 118070
00:06:20.229 10:52:34 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:20.229 10:52:34 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:20.229 10:52:34 json_config -- json_config/common.sh@41 -- # kill -0 118070
00:06:20.229 10:52:34 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:06:20.799 10:52:34 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:06:20.799 10:52:34 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:20.799 10:52:34 json_config -- json_config/common.sh@41 -- # kill -0 118070
00:06:20.799 10:52:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:20.799 10:52:34 json_config -- json_config/common.sh@43 -- # break
00:06:20.799 10:52:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:20.799 10:52:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:20.799 SPDK target shutdown done
00:06:20.799 10:52:34 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:06:20.799 INFO: relaunching applications...
00:06:20.799 10:52:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:20.799 10:52:34 json_config -- json_config/common.sh@9 -- # local app=target
00:06:20.799 10:52:34 json_config -- json_config/common.sh@10 -- # shift
00:06:20.799 10:52:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:20.799 10:52:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:20.799 10:52:34 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:20.799 10:52:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:20.799 10:52:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:20.799 10:52:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=119257
00:06:20.799 10:52:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:20.799 10:52:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:20.799 Waiting for target to run...
00:06:20.799 10:52:34 json_config -- json_config/common.sh@25 -- # waitforlisten 119257 /var/tmp/spdk_tgt.sock
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 119257 ']'
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:20.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:20.799 10:52:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:20.799 [2024-07-11 10:52:35.026363] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
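(The shutdown sequence visible in json_config/common.sh above is a plain poll loop: send SIGINT to the target, then probe the PID with kill -0 in half-second steps, up to 30 tries, before announcing 'SPDK target shutdown done'. A minimal sketch of that loop, with $pid standing in for the app_pid entry the script actually uses:
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 fails once the process is gone
      sleep 0.5
  done)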
00:06:20.799 [2024-07-11 10:52:35.026462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119257 ]
00:06:20.799 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.058 [2024-07-11 10:52:35.368649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.058 [2024-07-11 10:52:35.428185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.342 [2024-07-11 10:52:38.447835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:24.342 [2024-07-11 10:52:38.480215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:06:24.342 10:52:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:24.342 10:52:38 json_config -- common/autotest_common.sh@862 -- # return 0
00:06:24.342 10:52:38 json_config -- json_config/common.sh@26 -- # echo ''
00:06:24.342
00:06:24.342 10:52:38 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:06:24.342 10:52:38 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
00:06:24.342 INFO: Checking if target configuration is the same...
00:06:24.342 10:52:38 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:24.342 10:52:38 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:06:24.342 10:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
+ '[' 2 -ne 2 ']'
+++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
+ rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
+++ basename /dev/fd/62
++ mktemp /tmp/62.XXX
+ tmp_file_1=/tmp/62.WLp
+++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
++ mktemp /tmp/spdk_tgt_config.json.XXX
+ tmp_file_2=/tmp/spdk_tgt_config.json.19x
+ ret=0
+ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:24.601 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
+ diff -u /tmp/62.WLp /tmp/spdk_tgt_config.json.19x
+ echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
+ rm /tmp/62.WLp /tmp/spdk_tgt_config.json.19x
+ exit 0
00:06:24.601 10:52:38 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:06:24.601 10:52:38 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:06:24.601 INFO: changing configuration and checking if this can be detected...
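(The 'same configuration' check traced above boils down to a sorted diff: dump the live config with save_config, normalize both JSON documents with config_filter.py -method sort, and diff the results, where exit 0 means no drift. A condensed sketch, assuming config_filter.py reads the config on stdin as the pipeline above suggests:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same')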
00:06:24.601 10:52:38 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:24.601 10:52:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:24.858 10:52:39 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:24.858 10:52:39 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:06:24.858 10:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
+ '[' 2 -ne 2 ']'
+++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
+ rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
+++ basename /dev/fd/62
++ mktemp /tmp/62.XXX
+ tmp_file_1=/tmp/62.ONr
+++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
++ mktemp /tmp/spdk_tgt_config.json.XXX
+ tmp_file_2=/tmp/spdk_tgt_config.json.GPk
+ ret=0
+ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:25.425 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
+ diff -u /tmp/62.ONr /tmp/spdk_tgt_config.json.GPk
+ ret=1
+ echo '=== Start of file: /tmp/62.ONr ==='
+ cat /tmp/62.ONr
+ echo '=== End of file: /tmp/62.ONr ==='
+ echo ''
+ echo '=== Start of file: /tmp/spdk_tgt_config.json.GPk ==='
+ cat /tmp/spdk_tgt_config.json.GPk
+ echo '=== End of file: /tmp/spdk_tgt_config.json.GPk ==='
+ echo ''
+ rm /tmp/62.ONr /tmp/spdk_tgt_config.json.GPk
+ exit 1
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
00:06:25.425 INFO: configuration change detected.
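(Change detection works by perturbing the config and expecting the same diff to fail: the test deletes MallocBdevForConfigChangeCheck over RPC and re-runs the sorted diff, treating the non-zero exit (ret=1) as proof that drift is caught. Roughly:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # the sorted diff above now exits 1, which the test reports as 'configuration change detected.')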
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@317 -- # [[ -n 119257 ]]
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@193 -- # uname -s
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:25.425 10:52:39 json_config -- json_config/json_config.sh@323 -- # killprocess 119257
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@948 -- # '[' -z 119257 ']'
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@952 -- # kill -0 119257
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@953 -- # uname
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119257
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:25.425 10:52:39 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119257'
killing process with pid 119257
10:52:39 json_config -- common/autotest_common.sh@967 -- # kill 119257
10:52:39 json_config -- common/autotest_common.sh@972 -- # wait 119257
00:06:26.805 10:52:41 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:26.805 10:52:41 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:06:26.805 10:52:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:26.805 10:52:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:27.064 10:52:41 json_config -- json_config/json_config.sh@328 -- # return 0
00:06:27.064 10:52:41 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:06:27.064 INFO: Success
00:06:27.064
00:06:27.064 real 0m15.575s
00:06:27.064 user 0m17.394s
00:06:27.064 sys 0m1.776s
00:06:27.064 10:52:41 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:27.064 10:52:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:27.064 ************************************
00:06:27.064 END TEST json_config
00:06:27.064 ************************************
00:06:27.064 10:52:41 -- common/autotest_common.sh@1142 -- # return 0
00:06:27.064 10:52:41 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:27.064 10:52:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:27.064 10:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:27.064 10:52:41 -- common/autotest_common.sh@10 -- # set +x
00:06:27.064 ************************************
00:06:27.064 START TEST json_config_extra_key
00:06:27.064 ************************************
00:06:27.064 10:52:41 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:27.064 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:27.064 10:52:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:27.065 10:52:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:27.065 10:52:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:27.065 10:52:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:27.065 10:52:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:27.065 10:52:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:27.065 10:52:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:27.065 10:52:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:27.065 10:52:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:27.065 10:52:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:27.065 INFO: launching applications...
00:06:27.065 10:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=120163
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:27.065 Waiting for target to run...
00:06:27.065 10:52:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 120163 /var/tmp/spdk_tgt.sock
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 120163 ']'
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:27.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:27.065 10:52:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:27.065 [2024-07-11 10:52:41.394633] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
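(json_config_extra_key exercises the non-interactive startup path: the target is launched directly from a JSON config with --json extra_key.json instead of being configured over RPC, and waitforlisten blocks until the RPC socket answers. A rough equivalent of that startup handshake, simplified from the helper's actual retry logic:
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # keep probing until the UNIX domain socket accepts RPCs
  done)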
00:06:27.065 [2024-07-11 10:52:41.394713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120163 ]
00:06:27.065 EAL: No free 2048 kB hugepages reported on node 1
00:06:27.634 [2024-07-11 10:52:41.893550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.634 [2024-07-11 10:52:41.967622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.201 10:52:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:28.201 10:52:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:28.201
00:06:28.201 10:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:28.201 INFO: shutting down applications...
00:06:28.201 10:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 120163 ]]
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 120163
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 120163
00:06:28.201 10:52:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 120163
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:28.462 10:52:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:28.462 SPDK target shutdown done
00:06:28.462 10:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:28.462 Success
00:06:28.462
00:06:28.462 real 0m1.558s
00:06:28.462 user 0m1.353s
00:06:28.462 sys 0m0.590s
00:06:28.462 10:52:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:28.462 10:52:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:28.462 ************************************
00:06:28.462 END TEST json_config_extra_key
00:06:28.462 ************************************
00:06:28.462 10:52:42 -- common/autotest_common.sh@1142 -- # return 0
00:06:28.462 10:52:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:28.462 10:52:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:28.462 10:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:28.462 10:52:42 -- common/autotest_common.sh@10 -- # set +x
00:06:28.722 ************************************
00:06:28.722 START TEST alias_rpc
00:06:28.722 ************************************
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:28.722 * Looking for test storage...
00:06:28.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:28.722 10:52:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:28.722 10:52:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=120357
00:06:28.722 10:52:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:28.722 10:52:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 120357
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 120357 ']'
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:28.722 10:52:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:28.722 [2024-07-11 10:52:43.003116] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:06:28.722 [2024-07-11 10:52:43.003206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120357 ]
00:06:28.722 EAL: No free 2048 kB hugepages reported on node 1
00:06:28.722 [2024-07-11 10:52:43.058722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.722 [2024-07-11 10:52:43.140383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.981 10:52:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:28.981 10:52:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:28.981 10:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:29.240 10:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 120357
00:06:29.240 10:52:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 120357 ']'
00:06:29.240 10:52:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 120357
00:06:29.240 10:52:43 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:06:29.240 10:52:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:29.240 10:52:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120357
00:06:29.498 10:52:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:29.498 10:52:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:29.498 10:52:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120357'
killing process with pid 120357
10:52:43 alias_rpc -- common/autotest_common.sh@967 -- # kill 120357
00:06:29.498 10:52:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 120357
00:06:29.755
00:06:29.755 real 0m1.165s
00:06:29.755 user 0m1.267s
00:06:29.755 sys 0m0.374s
00:06:29.755 10:52:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:29.755 10:52:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.755 ************************************
00:06:29.755 END TEST alias_rpc
00:06:29.755 ************************************
00:06:29.755 10:52:44 -- common/autotest_common.sh@1142 -- # return 0
00:06:29.755 10:52:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:06:29.755 10:52:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:29.755 10:52:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:29.755 10:52:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:29.755 10:52:44 -- common/autotest_common.sh@10 -- # set +x
00:06:29.755 ************************************
00:06:29.755 START TEST spdkcli_tcp
00:06:29.755 ************************************
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:29.755 * Looking for test storage...
00:06:29.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=120543
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:29.755 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 120543
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 120543 ']'
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:29.755 10:52:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:52:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
10:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:30.012 [2024-07-11 10:52:44.221788] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
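(The spdkcli_tcp run that follows checks the RPC server over TCP rather than over the UNIX socket: a socat process bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is then pointed at 127.0.0.1:9998 with retries and a timeout. The bridge and client invocation, as traced below:
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
The rpc_get_methods reply that follows is the full JSON array of RPC method names exposed by this build of spdk_tgt.)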
00:06:30.012 [2024-07-11 10:52:44.221880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120543 ]
00:06:30.012 EAL: No free 2048 kB hugepages reported on node 1
00:06:30.012 [2024-07-11 10:52:44.277919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:30.012 [2024-07-11 10:52:44.362432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.012 [2024-07-11 10:52:44.362435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.270 10:52:44 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:30.270 10:52:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:06:30.270 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=120662
00:06:30.270 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:30.270 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:30.528 [
00:06:30.528 "bdev_malloc_delete",
00:06:30.528 "bdev_malloc_create",
00:06:30.528 "bdev_null_resize",
00:06:30.528 "bdev_null_delete",
00:06:30.528 "bdev_null_create",
00:06:30.528 "bdev_nvme_cuse_unregister",
00:06:30.528 "bdev_nvme_cuse_register",
00:06:30.528 "bdev_opal_new_user",
00:06:30.528 "bdev_opal_set_lock_state",
00:06:30.528 "bdev_opal_delete",
00:06:30.528 "bdev_opal_get_info",
00:06:30.528 "bdev_opal_create",
00:06:30.528 "bdev_nvme_opal_revert",
00:06:30.528 "bdev_nvme_opal_init",
00:06:30.528 "bdev_nvme_send_cmd",
00:06:30.528 "bdev_nvme_get_path_iostat",
00:06:30.528 "bdev_nvme_get_mdns_discovery_info",
00:06:30.528 "bdev_nvme_stop_mdns_discovery",
00:06:30.528 "bdev_nvme_start_mdns_discovery",
00:06:30.528 "bdev_nvme_set_multipath_policy",
00:06:30.528 "bdev_nvme_set_preferred_path",
00:06:30.528 "bdev_nvme_get_io_paths",
00:06:30.528 "bdev_nvme_remove_error_injection",
00:06:30.528 "bdev_nvme_add_error_injection",
00:06:30.528 "bdev_nvme_get_discovery_info",
00:06:30.528 "bdev_nvme_stop_discovery",
00:06:30.528 "bdev_nvme_start_discovery",
00:06:30.528 "bdev_nvme_get_controller_health_info",
00:06:30.528 "bdev_nvme_disable_controller",
00:06:30.528 "bdev_nvme_enable_controller",
00:06:30.528 "bdev_nvme_reset_controller",
00:06:30.528 "bdev_nvme_get_transport_statistics",
00:06:30.528 "bdev_nvme_apply_firmware",
00:06:30.528 "bdev_nvme_detach_controller",
00:06:30.528 "bdev_nvme_get_controllers",
00:06:30.528 "bdev_nvme_attach_controller",
00:06:30.528 "bdev_nvme_set_hotplug",
00:06:30.528 "bdev_nvme_set_options",
00:06:30.528 "bdev_passthru_delete",
00:06:30.528 "bdev_passthru_create",
00:06:30.528 "bdev_lvol_set_parent_bdev",
00:06:30.528 "bdev_lvol_set_parent",
00:06:30.528 "bdev_lvol_check_shallow_copy",
00:06:30.528 "bdev_lvol_start_shallow_copy",
00:06:30.528 "bdev_lvol_grow_lvstore",
00:06:30.528 "bdev_lvol_get_lvols",
00:06:30.528 "bdev_lvol_get_lvstores",
00:06:30.528 "bdev_lvol_delete",
00:06:30.528 "bdev_lvol_set_read_only",
00:06:30.528 "bdev_lvol_resize",
00:06:30.528 "bdev_lvol_decouple_parent",
00:06:30.528 "bdev_lvol_inflate",
00:06:30.528 "bdev_lvol_rename",
00:06:30.528 "bdev_lvol_clone_bdev",
00:06:30.528 "bdev_lvol_clone",
00:06:30.528 "bdev_lvol_snapshot",
00:06:30.528 "bdev_lvol_create",
00:06:30.528 "bdev_lvol_delete_lvstore",
00:06:30.528 "bdev_lvol_rename_lvstore",
00:06:30.528 "bdev_lvol_create_lvstore",
00:06:30.528 "bdev_raid_set_options",
00:06:30.528 "bdev_raid_remove_base_bdev",
00:06:30.528 "bdev_raid_add_base_bdev",
00:06:30.528 "bdev_raid_delete",
00:06:30.528 "bdev_raid_create",
00:06:30.528 "bdev_raid_get_bdevs",
00:06:30.528 "bdev_error_inject_error",
00:06:30.528 "bdev_error_delete",
00:06:30.528 "bdev_error_create",
00:06:30.528 "bdev_split_delete",
00:06:30.528 "bdev_split_create",
00:06:30.528 "bdev_delay_delete",
00:06:30.528 "bdev_delay_create",
00:06:30.528 "bdev_delay_update_latency",
00:06:30.528 "bdev_zone_block_delete",
00:06:30.528 "bdev_zone_block_create",
00:06:30.528 "blobfs_create",
00:06:30.528 "blobfs_detect",
00:06:30.528 "blobfs_set_cache_size",
00:06:30.528 "bdev_aio_delete",
00:06:30.528 "bdev_aio_rescan",
00:06:30.528 "bdev_aio_create",
00:06:30.528 "bdev_ftl_set_property",
00:06:30.528 "bdev_ftl_get_properties",
00:06:30.528 "bdev_ftl_get_stats",
00:06:30.528 "bdev_ftl_unmap",
00:06:30.528 "bdev_ftl_unload",
00:06:30.528 "bdev_ftl_delete",
00:06:30.528 "bdev_ftl_load",
00:06:30.528 "bdev_ftl_create",
00:06:30.528 "bdev_virtio_attach_controller",
00:06:30.528 "bdev_virtio_scsi_get_devices",
00:06:30.528 "bdev_virtio_detach_controller",
00:06:30.528 "bdev_virtio_blk_set_hotplug",
00:06:30.528 "bdev_iscsi_delete",
00:06:30.528 "bdev_iscsi_create",
00:06:30.528 "bdev_iscsi_set_options",
00:06:30.528 "accel_error_inject_error",
00:06:30.528 "ioat_scan_accel_module",
00:06:30.528 "dsa_scan_accel_module",
00:06:30.528 "iaa_scan_accel_module",
00:06:30.528 "vfu_virtio_create_scsi_endpoint",
00:06:30.528 "vfu_virtio_scsi_remove_target",
00:06:30.528 "vfu_virtio_scsi_add_target",
00:06:30.528 "vfu_virtio_create_blk_endpoint",
00:06:30.528 "vfu_virtio_delete_endpoint",
00:06:30.528 "keyring_file_remove_key",
00:06:30.528 "keyring_file_add_key",
00:06:30.528 "keyring_linux_set_options",
00:06:30.528 "iscsi_get_histogram",
00:06:30.528 "iscsi_enable_histogram",
00:06:30.528 "iscsi_set_options",
00:06:30.528 "iscsi_get_auth_groups",
00:06:30.528 "iscsi_auth_group_remove_secret",
00:06:30.528 "iscsi_auth_group_add_secret",
00:06:30.528 "iscsi_delete_auth_group",
00:06:30.529 "iscsi_create_auth_group",
00:06:30.529 "iscsi_set_discovery_auth",
00:06:30.529 "iscsi_get_options",
00:06:30.529 "iscsi_target_node_request_logout",
00:06:30.529 "iscsi_target_node_set_redirect",
00:06:30.529 "iscsi_target_node_set_auth",
00:06:30.529 "iscsi_target_node_add_lun",
00:06:30.529 "iscsi_get_stats",
00:06:30.529 "iscsi_get_connections",
00:06:30.529 "iscsi_portal_group_set_auth",
00:06:30.529 "iscsi_start_portal_group",
00:06:30.529 "iscsi_delete_portal_group",
00:06:30.529 "iscsi_create_portal_group",
00:06:30.529 "iscsi_get_portal_groups",
00:06:30.529 "iscsi_delete_target_node",
00:06:30.529 "iscsi_target_node_remove_pg_ig_maps",
00:06:30.529 "iscsi_target_node_add_pg_ig_maps",
00:06:30.529 "iscsi_create_target_node",
00:06:30.529 "iscsi_get_target_nodes",
00:06:30.529 "iscsi_delete_initiator_group",
00:06:30.529 "iscsi_initiator_group_remove_initiators",
00:06:30.529 "iscsi_initiator_group_add_initiators",
00:06:30.529 "iscsi_create_initiator_group",
00:06:30.529 "iscsi_get_initiator_groups",
00:06:30.529 "nvmf_set_crdt",
00:06:30.529 "nvmf_set_config",
00:06:30.529 "nvmf_set_max_subsystems",
00:06:30.529 "nvmf_stop_mdns_prr",
00:06:30.529 "nvmf_publish_mdns_prr",
00:06:30.529 "nvmf_subsystem_get_listeners",
00:06:30.529 "nvmf_subsystem_get_qpairs",
00:06:30.529 "nvmf_subsystem_get_controllers",
00:06:30.529 "nvmf_get_stats",
00:06:30.529 "nvmf_get_transports",
00:06:30.529 "nvmf_create_transport",
00:06:30.529 "nvmf_get_targets",
00:06:30.529 "nvmf_delete_target",
00:06:30.529 "nvmf_create_target",
00:06:30.529 "nvmf_subsystem_allow_any_host",
00:06:30.529 "nvmf_subsystem_remove_host",
00:06:30.529 "nvmf_subsystem_add_host",
00:06:30.529 "nvmf_ns_remove_host",
00:06:30.529 "nvmf_ns_add_host",
00:06:30.529 "nvmf_subsystem_remove_ns",
00:06:30.529 "nvmf_subsystem_add_ns",
00:06:30.529 "nvmf_subsystem_listener_set_ana_state",
00:06:30.529 "nvmf_discovery_get_referrals",
00:06:30.529 "nvmf_discovery_remove_referral",
00:06:30.529 "nvmf_discovery_add_referral",
00:06:30.529 "nvmf_subsystem_remove_listener",
00:06:30.529 "nvmf_subsystem_add_listener",
00:06:30.529 "nvmf_delete_subsystem",
00:06:30.529 "nvmf_create_subsystem",
00:06:30.529 "nvmf_get_subsystems",
00:06:30.529 "env_dpdk_get_mem_stats",
00:06:30.529 "nbd_get_disks",
00:06:30.529 "nbd_stop_disk",
00:06:30.529 "nbd_start_disk",
00:06:30.529 "ublk_recover_disk",
00:06:30.529 "ublk_get_disks",
00:06:30.529 "ublk_stop_disk",
00:06:30.529 "ublk_start_disk",
00:06:30.529 "ublk_destroy_target",
00:06:30.529 "ublk_create_target",
00:06:30.529 "virtio_blk_create_transport",
00:06:30.529 "virtio_blk_get_transports",
00:06:30.529 "vhost_controller_set_coalescing",
00:06:30.529 "vhost_get_controllers",
00:06:30.529 "vhost_delete_controller",
00:06:30.529 "vhost_create_blk_controller",
00:06:30.529 "vhost_scsi_controller_remove_target",
00:06:30.529 "vhost_scsi_controller_add_target",
00:06:30.529 "vhost_start_scsi_controller",
00:06:30.529 "vhost_create_scsi_controller",
00:06:30.529 "thread_set_cpumask",
00:06:30.529 "framework_get_governor",
00:06:30.529 "framework_get_scheduler",
00:06:30.529 "framework_set_scheduler",
00:06:30.529 "framework_get_reactors",
00:06:30.529 "thread_get_io_channels",
00:06:30.529 "thread_get_pollers",
00:06:30.529 "thread_get_stats",
00:06:30.529 "framework_monitor_context_switch",
00:06:30.529 "spdk_kill_instance",
00:06:30.529 "log_enable_timestamps",
00:06:30.529 "log_get_flags",
00:06:30.529 "log_clear_flag",
00:06:30.529 "log_set_flag",
00:06:30.529 "log_get_level",
00:06:30.529 "log_set_level",
00:06:30.529 "log_get_print_level",
00:06:30.529 "log_set_print_level",
00:06:30.529 "framework_enable_cpumask_locks",
00:06:30.529 "framework_disable_cpumask_locks",
00:06:30.529 "framework_wait_init",
00:06:30.529 "framework_start_init",
00:06:30.529 "scsi_get_devices",
00:06:30.529 "bdev_get_histogram",
00:06:30.529 "bdev_enable_histogram",
00:06:30.529 "bdev_set_qos_limit",
00:06:30.529 "bdev_set_qd_sampling_period",
00:06:30.529 "bdev_get_bdevs",
00:06:30.529 "bdev_reset_iostat",
00:06:30.529 "bdev_get_iostat",
00:06:30.529 "bdev_examine",
00:06:30.529 "bdev_wait_for_examine",
00:06:30.529 "bdev_set_options",
00:06:30.529 "notify_get_notifications",
00:06:30.529 "notify_get_types",
00:06:30.529 "accel_get_stats",
00:06:30.529 "accel_set_options",
00:06:30.529 "accel_set_driver",
00:06:30.529 "accel_crypto_key_destroy",
00:06:30.529 "accel_crypto_keys_get",
00:06:30.529 "accel_crypto_key_create",
00:06:30.529 "accel_assign_opc",
00:06:30.529 "accel_get_module_info",
00:06:30.529 "accel_get_opc_assignments",
00:06:30.529 "vmd_rescan",
00:06:30.529 "vmd_remove_device",
00:06:30.529 "vmd_enable",
00:06:30.529 "sock_get_default_impl",
00:06:30.529 "sock_set_default_impl",
00:06:30.529 "sock_impl_set_options",
00:06:30.529 "sock_impl_get_options",
00:06:30.529 "iobuf_get_stats",
00:06:30.529 "iobuf_set_options",
00:06:30.529 "keyring_get_keys", 00:06:30.529 "framework_get_pci_devices", 00:06:30.529 "framework_get_config", 00:06:30.529 "framework_get_subsystems", 00:06:30.529 "vfu_tgt_set_base_path", 00:06:30.529 "trace_get_info", 00:06:30.529 "trace_get_tpoint_group_mask", 00:06:30.529 "trace_disable_tpoint_group", 00:06:30.529 "trace_enable_tpoint_group", 00:06:30.529 "trace_clear_tpoint_mask", 00:06:30.529 "trace_set_tpoint_mask", 00:06:30.529 "spdk_get_version", 00:06:30.529 "rpc_get_methods" 00:06:30.529 ] 00:06:30.529 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.529 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:30.529 10:52:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 120543 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 120543 ']' 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 120543 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120543 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120543' 00:06:30.529 killing process with pid 120543 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 120543 00:06:30.529 10:52:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 120543 00:06:31.096 00:06:31.096 real 0m1.177s 00:06:31.096 user 0m2.116s 00:06:31.096 sys 0m0.442s 00:06:31.096 10:52:45 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.096 10:52:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.096 ************************************ 00:06:31.096 END TEST spdkcli_tcp 00:06:31.096 ************************************ 00:06:31.096 10:52:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.096 10:52:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:31.096 10:52:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.096 10:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.096 10:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.096 ************************************ 00:06:31.096 START TEST dpdk_mem_utility 00:06:31.096 ************************************ 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:31.096 * Looking for test storage... 
00:06:31.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:31.096 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:31.096 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=120745 00:06:31.096 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.096 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 120745 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 120745 ']' 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.096 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.096 [2024-07-11 10:52:45.447354] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:06:31.096 [2024-07-11 10:52:45.447435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120745 ] 00:06:31.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.096 [2024-07-11 10:52:45.503719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.355 [2024-07-11 10:52:45.590763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.614 { 00:06:31.614 "filename": "/tmp/spdk_mem_dump.txt" 00:06:31.614 } 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:31.614 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:31.614 1 heaps totaling size 814.000000 MiB 00:06:31.614 size: 814.000000 MiB heap id: 0 00:06:31.614 end heaps---------- 00:06:31.614 8 mempools totaling size 598.116089 MiB 00:06:31.614 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:31.614 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:31.614 size: 84.521057 MiB name: bdev_io_120745 00:06:31.614 size: 51.011292 MiB name: evtpool_120745 00:06:31.614 size: 
50.003479 MiB name: msgpool_120745 00:06:31.614 size: 21.763794 MiB name: PDU_Pool 00:06:31.614 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:31.614 size: 0.026123 MiB name: Session_Pool 00:06:31.614 end mempools------- 00:06:31.614 6 memzones totaling size 4.142822 MiB 00:06:31.614 size: 1.000366 MiB name: RG_ring_0_120745 00:06:31.614 size: 1.000366 MiB name: RG_ring_1_120745 00:06:31.614 size: 1.000366 MiB name: RG_ring_4_120745 00:06:31.614 size: 1.000366 MiB name: RG_ring_5_120745 00:06:31.614 size: 0.125366 MiB name: RG_ring_2_120745 00:06:31.614 size: 0.015991 MiB name: RG_ring_3_120745 00:06:31.614 end memzones------- 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:31.614 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:31.614 list of free elements. size: 12.519348 MiB 00:06:31.614 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:31.614 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:31.614 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:31.614 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:31.614 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:31.614 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:31.614 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:31.614 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:31.614 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:31.614 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:31.614 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:31.614 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:31.614 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:31.614 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:31.614 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:31.614 list of standard malloc elements. 
size: 199.218079 MiB 00:06:31.614 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:31.614 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:31.614 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:31.614 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:31.614 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:31.614 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:31.614 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:31.614 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:31.614 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:31.614 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:31.614 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:31.614 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:31.614 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:31.614 list of memzone associated elements. 
size: 602.262573 MiB 00:06:31.614 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:31.614 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.614 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:31.614 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.614 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:31.614 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_120745_0 00:06:31.614 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:31.614 associated memzone info: size: 48.002930 MiB name: MP_evtpool_120745_0 00:06:31.614 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:31.614 associated memzone info: size: 48.002930 MiB name: MP_msgpool_120745_0 00:06:31.614 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:31.614 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.614 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:31.614 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.614 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:31.614 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_120745 00:06:31.614 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:31.614 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_120745 00:06:31.614 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:31.614 associated memzone info: size: 1.007996 MiB name: MP_evtpool_120745 00:06:31.614 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:31.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.614 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:31.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.614 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:31.614 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.614 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:31.614 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.614 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:31.614 associated memzone info: size: 1.000366 MiB name: RG_ring_0_120745 00:06:31.614 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:31.614 associated memzone info: size: 1.000366 MiB name: RG_ring_1_120745 00:06:31.614 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:31.614 associated memzone info: size: 1.000366 MiB name: RG_ring_4_120745 00:06:31.614 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:31.614 associated memzone info: size: 1.000366 MiB name: RG_ring_5_120745 00:06:31.614 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:31.614 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_120745 00:06:31.614 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:31.614 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.614 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:31.614 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:31.614 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:31.614 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.614 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:31.614 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_120745 00:06:31.614 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:31.614 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.614 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:31.614 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.614 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:31.614 associated memzone info: size: 0.015991 MiB name: RG_ring_3_120745 00:06:31.614 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:31.614 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.614 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:31.614 associated memzone info: size: 0.000183 MiB name: MP_msgpool_120745 00:06:31.614 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:31.614 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_120745 00:06:31.614 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:31.614 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.614 10:52:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 120745 00:06:31.614 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 120745 ']' 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 120745 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120745 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120745' 00:06:31.615 killing process with pid 120745 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 120745 00:06:31.615 10:52:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 120745 00:06:32.181 00:06:32.181 real 0m1.007s 00:06:32.181 user 0m0.980s 00:06:32.181 sys 0m0.397s 00:06:32.181 10:52:46 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.181 10:52:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.181 ************************************ 00:06:32.181 END TEST dpdk_mem_utility 00:06:32.181 ************************************ 00:06:32.181 10:52:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:32.181 10:52:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:32.181 10:52:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.181 10:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.181 10:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.181 ************************************ 00:06:32.181 START TEST event 00:06:32.181 ************************************ 00:06:32.181 10:52:46 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:32.181 * Looking for test storage... 
00:06:32.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:32.181 10:52:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:32.181 10:52:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:32.181 10:52:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:32.181 10:52:46 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:32.181 10:52:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.181 10:52:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.181 ************************************ 00:06:32.181 START TEST event_perf 00:06:32.181 ************************************ 00:06:32.181 10:52:46 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:32.181 Running I/O for 1 seconds...[2024-07-11 10:52:46.495182] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:06:32.182 [2024-07-11 10:52:46.495247] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120934 ] 00:06:32.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.182 [2024-07-11 10:52:46.552181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.441 [2024-07-11 10:52:46.637459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.441 [2024-07-11 10:52:46.637534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.441 [2024-07-11 10:52:46.637648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.441 [2024-07-11 10:52:46.637650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.376 Running I/O for 1 seconds... 00:06:33.376 lcore 0: 236994 00:06:33.376 lcore 1: 236993 00:06:33.376 lcore 2: 236994 00:06:33.376 lcore 3: 236996 00:06:33.376 done. 00:06:33.376 00:06:33.376 real 0m1.233s 00:06:33.377 user 0m4.149s 00:06:33.377 sys 0m0.079s 00:06:33.377 10:52:47 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.377 10:52:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 ************************************ 00:06:33.377 END TEST event_perf 00:06:33.377 ************************************ 00:06:33.377 10:52:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.377 10:52:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.377 10:52:47 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.377 10:52:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.377 10:52:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 ************************************ 00:06:33.377 START TEST event_reactor 00:06:33.377 ************************************ 00:06:33.377 10:52:47 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.377 [2024-07-11 10:52:47.773725] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:06:33.377 [2024-07-11 10:52:47.773803] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121102 ] 00:06:33.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.635 [2024-07-11 10:52:47.831908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.635 [2024-07-11 10:52:47.915424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.568 test_start 00:06:34.568 oneshot 00:06:34.568 tick 100 00:06:34.568 tick 100 00:06:34.568 tick 250 00:06:34.568 tick 100 00:06:34.568 tick 100 00:06:34.568 tick 100 00:06:34.568 tick 250 00:06:34.568 tick 500 00:06:34.568 tick 100 00:06:34.568 tick 100 00:06:34.568 tick 250 00:06:34.568 tick 100 00:06:34.568 tick 100 00:06:34.568 test_end 00:06:34.568 00:06:34.568 real 0m1.226s 00:06:34.568 user 0m1.147s 00:06:34.568 sys 0m0.075s 00:06:34.568 10:52:48 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.568 10:52:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:34.568 ************************************ 00:06:34.568 END TEST event_reactor 00:06:34.568 ************************************ 00:06:34.826 10:52:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:34.826 10:52:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.826 10:52:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:34.826 10:52:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.826 10:52:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.826 ************************************ 00:06:34.826 START TEST event_reactor_perf 00:06:34.826 ************************************ 00:06:34.826 10:52:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.826 [2024-07-11 10:52:49.046893] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:06:34.826 [2024-07-11 10:52:49.046956] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121364 ] 00:06:34.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.826 [2024-07-11 10:52:49.104199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.826 [2024-07-11 10:52:49.187786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.197 test_start 00:06:36.197 test_end 00:06:36.197 Performance: 444862 events per second 00:06:36.197 00:06:36.197 real 0m1.228s 00:06:36.197 user 0m1.148s 00:06:36.197 sys 0m0.075s 00:06:36.197 10:52:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.197 10:52:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 ************************************ 00:06:36.197 END TEST event_reactor_perf 00:06:36.197 ************************************ 00:06:36.197 10:52:50 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.197 10:52:50 event -- event/event.sh@49 -- # uname -s 00:06:36.197 10:52:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:36.197 10:52:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:36.197 10:52:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.197 10:52:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.197 10:52:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 ************************************ 00:06:36.197 START TEST event_scheduler 00:06:36.197 ************************************ 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:36.197 * Looking for test storage... 00:06:36.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=121542 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 121542 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 121542 ']' 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
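At this point the harness blocks in waitforlisten until the scheduler app answers on /var/tmp/spdk.sock. A simplified sketch of that wait loop, assuming it polls with rpc.py the way the retry counter above (local max_retries=100) suggests; the real helper in autotest_common.sh carries more error handling than shown here:

    rpc_addr=/var/tmp/spdk.sock
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        # a short RPC probe; success means the target is up and listening
        if "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done

Once the probe succeeds, the test proceeds to configure the scheduler over the same socket.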
00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 [2024-07-11 10:52:50.403679] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:06:36.197 [2024-07-11 10:52:50.403779] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121542 ] 00:06:36.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.197 [2024-07-11 10:52:50.462475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.197 [2024-07-11 10:52:50.551782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.197 [2024-07-11 10:52:50.551809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.197 [2024-07-11 10:52:50.551867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.197 [2024-07-11 10:52:50.551871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 [2024-07-11 10:52:50.608699] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:36.197 [2024-07-11 10:52:50.608725] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:36.197 [2024-07-11 10:52:50.608763] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:36.197 [2024-07-11 10:52:50.608777] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:36.197 [2024-07-11 10:52:50.608787] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.197 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.197 10:52:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.455 [2024-07-11 10:52:50.705383] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
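The --wait-for-rpc startup used here (see the -m 0xF -p 0x2 --wait-for-rpc invocation above) pauses the app before subsystem initialization so the scheduler can be selected first. Condensed to plain rpc.py calls, the sequence just traced is (a sketch; the test itself goes through the rpc_cmd wrapper):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" framework_set_scheduler dynamic   # select the dynamic scheduler while paused
    "$rpc_py" framework_start_init              # finish init; reactors begin scheduling

The NOTICE lines above show the dpdk governor failing to initialize on this host, after which the dynamic scheduler's options are applied: load limit 20, core limit 80, core busy 95.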
00:06:36.456 10:52:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:36.456 10:52:50 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.456 10:52:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 ************************************ 00:06:36.456 START TEST scheduler_create_thread 00:06:36.456 ************************************ 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 2 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 3 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 4 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 5 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 6 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 7 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 8 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 9 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 10 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.456 10:52:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.021 10:52:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.021 00:06:37.021 real 0m0.589s 00:06:37.021 user 0m0.009s 00:06:37.021 sys 0m0.003s 00:06:37.021 10:52:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.021 10:52:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.021 ************************************ 00:06:37.021 END TEST scheduler_create_thread 00:06:37.021 ************************************ 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:37.021 10:52:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.021 10:52:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 121542 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 121542 ']' 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 121542 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121542 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121542' 00:06:37.021 killing process with pid 121542 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 121542 00:06:37.021 10:52:51 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 121542 00:06:37.589 [2024-07-11 10:52:51.797775] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
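The scheduler_create_thread subtest above drives one full thread lifecycle through the test's RPC plugin (loaded with --plugin scheduler_plugin; presumably registered by the scheduler test app). Condensed, and assuming rpc_cmd forwards to rpc.py as elsewhere in this run:

    rpc="rpc_cmd --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle thread pinned to core 0
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$thread_id" 50             # raise its busy share to 50%
    thread_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$thread_id"                    # and tear it back down

Here -m is the pinning cpumask and -a the simulated active percentage; the unpinned one_third_active, half_active, and deleted threads are left for the dynamic scheduler to place.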
00:06:37.589 00:06:37.589 real 0m1.692s 00:06:37.589 user 0m2.139s 00:06:37.589 sys 0m0.321s 00:06:37.589 10:52:52 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.589 10:52:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.589 ************************************ 00:06:37.589 END TEST event_scheduler 00:06:37.589 ************************************ 00:06:37.849 10:52:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.849 10:52:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:37.849 10:52:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:37.849 10:52:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.849 10:52:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.849 10:52:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.849 ************************************ 00:06:37.849 START TEST app_repeat 00:06:37.849 ************************************ 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=121743 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 121743' 00:06:37.849 Process app_repeat pid: 121743 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:37.849 spdk_app_start Round 0 00:06:37.849 10:52:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121743 /var/tmp/spdk-nbd.sock 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 121743 ']' 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.849 10:52:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.849 [2024-07-11 10:52:52.082286] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:06:37.849 [2024-07-11 10:52:52.082353] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121743 ] 00:06:37.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.849 [2024-07-11 10:52:52.138969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.849 [2024-07-11 10:52:52.218289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.849 [2024-07-11 10:52:52.218293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.108 10:52:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.108 10:52:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.108 10:52:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.366 Malloc0 00:06:38.366 10:52:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.625 Malloc1 00:06:38.625 10:52:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.625 10:52:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.883 /dev/nbd0 00:06:38.883 10:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.883 10:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.883 10:52:53 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.883 1+0 records in 00:06:38.883 1+0 records out 00:06:38.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148682 s, 27.5 MB/s 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.883 10:52:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.883 10:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.883 10:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.883 10:52:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.141 /dev/nbd1 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.141 1+0 records in 00:06:39.141 1+0 records out 00:06:39.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182241 s, 22.5 MB/s 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.141 10:52:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.141 10:52:53 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.141 10:52:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.398 { 00:06:39.398 "nbd_device": "/dev/nbd0", 00:06:39.398 "bdev_name": "Malloc0" 00:06:39.398 }, 00:06:39.398 { 00:06:39.398 "nbd_device": "/dev/nbd1", 00:06:39.398 "bdev_name": "Malloc1" 00:06:39.398 } 00:06:39.398 ]' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.398 { 00:06:39.398 "nbd_device": "/dev/nbd0", 00:06:39.398 "bdev_name": "Malloc0" 00:06:39.398 }, 00:06:39.398 { 00:06:39.398 "nbd_device": "/dev/nbd1", 00:06:39.398 "bdev_name": "Malloc1" 00:06:39.398 } 00:06:39.398 ]' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.398 /dev/nbd1' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.398 /dev/nbd1' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.398 256+0 records in 00:06:39.398 256+0 records out 00:06:39.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502554 s, 209 MB/s 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.398 256+0 records in 00:06:39.398 256+0 records out 00:06:39.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219671 s, 47.7 MB/s 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.398 256+0 records in 00:06:39.398 256+0 records out 00:06:39.398 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0231829 s, 45.2 MB/s 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.398 10:52:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.656 10:52:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.914 10:52:54 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.914 10:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.172 10:52:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.172 10:52:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.738 10:52:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.738 [2024-07-11 10:52:55.072170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.738 [2024-07-11 10:52:55.153195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.738 [2024-07-11 10:52:55.153195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.996 [2024-07-11 10:52:55.204935] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.996 [2024-07-11 10:52:55.204998] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.522 10:52:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.522 10:52:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.522 spdk_app_start Round 1 00:06:43.522 10:52:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121743 /var/tmp/spdk-nbd.sock 00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 121743 ']' 00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
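The rounds above all come from the same driver loop in test/event/event.sh. A condensed sketch of that loop, pieced together from the event/event.sh@23-25 and @34-35 trace lines (paths and the pid variable are shortened here; the real script carries more setup than shown):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # block until the app_repeat process serves RPCs on its UNIX socket
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
        # ... create Malloc0/Malloc1 over RPC and run the nbd write/verify pass ...
        # ask the app to shut down, then give it time to come back up for the next round
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done

Each iteration therefore exercises a full start/IO/SIGTERM cycle against the same process (pid 121743 in this run), which is exactly the repeated restart behavior app_repeat exists to test.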
00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.522 10:52:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.780 10:52:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.780 10:52:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.780 10:52:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.039 Malloc0 00:06:44.039 10:52:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.298 Malloc1 00:06:44.298 10:52:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.298 10:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.556 /dev/nbd0 00:06:44.556 10:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.556 10:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.556 10:52:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:44.556 1+0 records in 00:06:44.556 1+0 records out 00:06:44.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186687 s, 21.9 MB/s 00:06:44.557 10:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.557 10:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.557 10:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.557 10:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.557 10:52:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.557 10:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.557 10:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.557 10:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.815 /dev/nbd1 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.815 1+0 records in 00:06:44.815 1+0 records out 00:06:44.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201264 s, 20.4 MB/s 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.815 10:52:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.815 10:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:45.100 { 00:06:45.100 "nbd_device": "/dev/nbd0", 00:06:45.100 "bdev_name": "Malloc0" 00:06:45.100 }, 00:06:45.100 { 00:06:45.100 "nbd_device": "/dev/nbd1", 00:06:45.100 "bdev_name": "Malloc1" 00:06:45.100 } 00:06:45.100 ]' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.100 { 00:06:45.100 "nbd_device": "/dev/nbd0", 00:06:45.100 "bdev_name": "Malloc0" 00:06:45.100 }, 00:06:45.100 { 00:06:45.100 "nbd_device": "/dev/nbd1", 00:06:45.100 "bdev_name": "Malloc1" 00:06:45.100 } 00:06:45.100 ]' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.100 /dev/nbd1' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.100 /dev/nbd1' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.100 256+0 records in 00:06:45.100 256+0 records out 00:06:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403753 s, 260 MB/s 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.100 256+0 records in 00:06:45.100 256+0 records out 00:06:45.100 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206364 s, 50.8 MB/s 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.100 10:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.358 256+0 records in 00:06:45.358 256+0 records out 00:06:45.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232798 s, 45.0 MB/s 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.358 10:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.629 10:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.955 10:53:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.956 10:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.239 10:53:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.239 10:53:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.535 10:53:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.535 [2024-07-11 10:53:00.866491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.811 [2024-07-11 10:53:00.953016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.811 [2024-07-11 10:53:00.953019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.811 [2024-07-11 10:53:01.013816] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.811 [2024-07-11 10:53:01.013882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.518 10:53:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.518 10:53:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:49.519 spdk_app_start Round 2 00:06:49.519 10:53:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 121743 /var/tmp/spdk-nbd.sock 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 121743 ']' 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
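The write/verify pass repeated inside every round is visible at bdev/nbd_common.sh@70-85 in the traces above. Stripped of the xtrace noise, it amounts to the following (a sketch; $TESTDIR stands in for the long workspace path, and error handling is elided):

    tmp_file=$TESTDIR/spdk/test/event/nbdrandtest

    # write phase: seed a 1 MiB random pattern, then copy it onto each exported device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: each device must read back byte-identical data
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"

Because the nbd devices are backed by the Malloc bdevs created over RPC, a cmp mismatch here would point at corruption somewhere along the RPC/nbd/bdev path rather than at the test fixture.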
00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.519 10:53:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:49.519 10:53:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.802 Malloc0 00:06:49.802 10:53:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.083 Malloc1 00:06:50.083 10:53:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.083 10:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.358 /dev/nbd0 00:06:50.358 10:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.358 10:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:50.358 1+0 records in 00:06:50.358 1+0 records out 00:06:50.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200951 s, 20.4 MB/s 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:50.358 10:53:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:50.358 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.358 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.358 10:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.627 /dev/nbd1 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.627 1+0 records in 00:06:50.627 1+0 records out 00:06:50.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215401 s, 19.0 MB/s 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:50.627 10:53:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.627 10:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:50.906 { 00:06:50.906 "nbd_device": "/dev/nbd0", 00:06:50.906 "bdev_name": "Malloc0" 00:06:50.906 }, 00:06:50.906 { 00:06:50.906 "nbd_device": "/dev/nbd1", 00:06:50.906 "bdev_name": "Malloc1" 00:06:50.906 } 00:06:50.906 ]' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.906 { 00:06:50.906 "nbd_device": "/dev/nbd0", 00:06:50.906 "bdev_name": "Malloc0" 00:06:50.906 }, 00:06:50.906 { 00:06:50.906 "nbd_device": "/dev/nbd1", 00:06:50.906 "bdev_name": "Malloc1" 00:06:50.906 } 00:06:50.906 ]' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.906 /dev/nbd1' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.906 /dev/nbd1' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.906 256+0 records in 00:06:50.906 256+0 records out 00:06:50.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476167 s, 220 MB/s 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.906 256+0 records in 00:06:50.906 256+0 records out 00:06:50.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215821 s, 48.6 MB/s 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.906 10:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.175 256+0 records in 00:06:51.175 256+0 records out 00:06:51.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245807 s, 42.7 MB/s 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.175 10:53:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.176 10:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.449 10:53:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.450 10:53:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.450 10:53:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.450 10:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.450 10:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.718 10:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.718 10:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.718 10:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.718 10:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.976 10:53:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.976 10:53:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.235 10:53:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.235 [2024-07-11 10:53:06.658176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.494 [2024-07-11 10:53:06.745935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.494 [2024-07-11 10:53:06.745938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.494 [2024-07-11 10:53:06.804143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.494 [2024-07-11 10:53:06.804214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.024 10:53:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 121743 /var/tmp/spdk-nbd.sock 00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 121743 ']' 00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
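Device readiness and teardown are both gated on polling /proc/partitions. From the common/autotest_common.sh@866-887 and bdev/nbd_common.sh@35-45 traces repeated above, the two helpers have roughly this shape (a sketch: the retry delay is an assumption, since these traces only show first-try successes, and the read-back check is simplified — the real helper retries the dd in a second loop, as the @882-887 lines show):

    waitfornbd() {    # wait until the kernel exposes the device, then prove it does I/O
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed backoff; not visible in the trace
        done
        # one direct 4 KiB read must succeed and produce a non-empty file
        dd if=/dev/"$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct
        [ "$(stat -c %s "$test_file")" != 0 ]
    }

    waitfornbd_exit() {    # inverse condition: loop until the device disappears
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed
        done
    }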
00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.024 10:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:55.282 10:53:09 event.app_repeat -- event/event.sh@39 -- # killprocess 121743 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 121743 ']' 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 121743 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.282 10:53:09 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121743 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121743' 00:06:55.541 killing process with pid 121743 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@967 -- # kill 121743 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@972 -- # wait 121743 00:06:55.541 spdk_app_start is called in Round 0. 00:06:55.541 Shutdown signal received, stop current app iteration 00:06:55.541 Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 reinitialization... 00:06:55.541 spdk_app_start is called in Round 1. 00:06:55.541 Shutdown signal received, stop current app iteration 00:06:55.541 Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 reinitialization... 00:06:55.541 spdk_app_start is called in Round 2. 00:06:55.541 Shutdown signal received, stop current app iteration 00:06:55.541 Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 reinitialization... 00:06:55.541 spdk_app_start is called in Round 3. 
00:06:55.541 Shutdown signal received, stop current app iteration 00:06:55.541 10:53:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:55.541 10:53:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:55.541 00:06:55.541 real 0m17.846s 00:06:55.541 user 0m38.937s 00:06:55.541 sys 0m3.222s 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.541 10:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.541 ************************************ 00:06:55.541 END TEST app_repeat 00:06:55.541 ************************************ 00:06:55.541 10:53:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:55.541 10:53:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:55.541 10:53:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.541 10:53:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.541 10:53:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.541 10:53:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.541 ************************************ 00:06:55.541 START TEST cpu_locks 00:06:55.541 ************************************ 00:06:55.541 10:53:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:55.800 * Looking for test storage... 00:06:55.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:55.800 10:53:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:55.800 10:53:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:55.800 10:53:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:55.800 10:53:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:55.800 10:53:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.800 10:53:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.800 10:53:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.800 ************************************ 00:06:55.800 START TEST default_locks 00:06:55.800 ************************************ 00:06:55.800 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:55.800 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=124126 00:06:55.800 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.800 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 124126 00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 124126 ']' 00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
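Final teardown of the app goes through the killprocess helper, which the trace at common/autotest_common.sh@948-972 above walks through almost line by line. Reassembled as a sketch (sudo-owned processes get special handling that this run does not hit):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                        # @952: the pid must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_0 in this run
        fi
        # @958: a sudo-owned process would be killed differently; branch not taken here
        echo "killing process with pid $pid"                  # @966
        kill "$pid"                                           # @967
        wait "$pid"                                           # @972
    }

The cpu_locks tests that follow reuse the same helper (pids 124126 and 124288) after probing each spdk_tgt instance with lslocks for its spdk_cpu_lock files; the "lslocks: write error" seen below is lslocks complaining about its output pipe closing early under grep -q, not a test failure.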
00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.801 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.801 [2024-07-11 10:53:10.082095] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:06:55.801 [2024-07-11 10:53:10.082192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124126 ] 00:06:55.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.801 [2024-07-11 10:53:10.141003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.059 [2024-07-11 10:53:10.228258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.059 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.059 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:56.059 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 124126 00:06:56.059 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 124126 00:06:56.059 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.626 lslocks: write error 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 124126 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 124126 ']' 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 124126 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124126 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124126' 00:06:56.626 killing process with pid 124126 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 124126 00:06:56.626 10:53:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 124126 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 124126 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 124126 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 124126 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 124126 ']' 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (124126) - No such process 00:06:56.885 ERROR: process (pid: 124126) is no longer running 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.885 00:06:56.885 real 0m1.232s 00:06:56.885 user 0m1.169s 00:06:56.885 sys 0m0.538s 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.885 10:53:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.885 ************************************ 00:06:56.885 END TEST default_locks 00:06:56.885 ************************************ 00:06:56.885 10:53:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.885 10:53:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.885 10:53:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.885 10:53:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.885 10:53:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.145 ************************************ 00:06:57.145 START TEST default_locks_via_rpc 00:06:57.145 ************************************ 00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=124288 00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.145 10:53:11 
00:06:57.145 10:53:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:57.145 10:53:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:57.145 10:53:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:57.145 10:53:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:57.145 10:53:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:57.145 ************************************
00:06:57.145 START TEST default_locks_via_rpc
00:06:57.145 ************************************
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=124288
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 124288
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 124288 ']'
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:57.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:57.145 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:57.145 [2024-07-11 10:53:11.362288] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:06:57.145 [2024-07-11 10:53:11.362365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124288 ]
00:06:57.145 EAL: No free 2048 kB hugepages reported on node 1
00:06:57.145 [2024-07-11 10:53:11.420597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.145 [2024-07-11 10:53:11.507155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 124288
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 124288
00:06:57.404 10:53:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 124288
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 124288 ']'
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 124288
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124288
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124288'
killing process with pid 124288
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 124288
00:06:57.665 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 124288
00:06:58.232
00:06:58.232 real 0m1.136s
00:06:58.232 user 0m1.088s
00:06:58.232 sys 0m0.496s
00:06:58.232 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:58.232 10:53:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:58.232 ************************************
00:06:58.232 END TEST default_locks_via_rpc
00:06:58.232 ************************************
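default_locks_via_rpc exercises the same lock files but flips them at runtime: framework_disable_cpumask_locks releases them (so no_locks sees an empty glob), and framework_enable_cpumask_locks re-claims them. A hedged sketch of driving those two RPCs by hand with SPDK's stock rpc.py client (paths assume a standard checkout):

# Release the running target's CPU core lock files, verify, then re-claim them.
./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files held"
./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_*    # expect spdk_cpu_lock_000 back for mask 0x1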
00:06:58.232 10:53:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:58.232 10:53:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:58.232 10:53:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:58.232 10:53:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:58.232 10:53:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:58.232 ************************************
00:06:58.232 START TEST non_locking_app_on_locked_coremask
00:06:58.232 ************************************
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=124454
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 124454 /var/tmp/spdk.sock
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 124454 ']'
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:58.232 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:58.232 [2024-07-11 10:53:12.555978] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:06:58.232 [2024-07-11 10:53:12.556056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124454 ]
00:06:58.232 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.232 [2024-07-11 10:53:12.614643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.490 [2024-07-11 10:53:12.698743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=124574
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 124574 /var/tmp/spdk2.sock
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 124574 ']'
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:58.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:58.748 10:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:58.748 [2024-07-11 10:53:12.984781] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:06:58.748 [2024-07-11 10:53:12.984863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124574 ]
00:06:58.748 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.006 [2024-07-11 10:53:13.068214] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:58.748 [2024-07-11 10:53:13.068251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.006 [2024-07-11 10:53:13.235312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.573 10:53:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:59.573 10:53:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:59.573 10:53:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 124454
00:06:59.573 10:53:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124454
00:06:59.573 10:53:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:00.139 lslocks: write error
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 124454
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 124454 ']'
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 124454
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124454
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124454'
killing process with pid 124454
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 124454
00:07:00.139 10:53:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 124454
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 124574
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 124574 ']'
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 124574
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124574
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:01.074 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124574'
killing process with pid 124574
00:07:01.075 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 124574
00:07:01.075 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 124574
00:07:01.333
00:07:01.333 real 0m3.174s
00:07:01.333 user 0m3.359s
00:07:01.333 sys 0m1.038s
00:07:01.333 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:01.333 10:53:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:01.333 ************************************
00:07:01.333 END TEST non_locking_app_on_locked_coremask
00:07:01.333 ************************************
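non_locking_app_on_locked_coremask shows the escape hatch used when two targets must share a core: the second instance passes --disable-cpumask-locks so it never touches the first instance's lock file, and it gets its own RPC socket via -r. A simplified sketch of that launch sequence (binary path as in the log; backgrounding and readiness waits omitted):

# First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
./build/bin/spdk_tgt -m 0x1 &
# Second target shares core 0 but skips lock claiming, on a separate socket.
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &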
00:07:01.333 10:53:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:01.333 10:53:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:01.333 10:53:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:01.333 10:53:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:01.333 10:53:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:01.333 ************************************
00:07:01.333 START TEST locking_app_on_unlocked_coremask
00:07:01.333 ************************************
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=124883
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 124883 /var/tmp/spdk.sock
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 124883 ']'
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:01.333 10:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:01.591 [2024-07-11 10:53:15.779126] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:01.591 [2024-07-11 10:53:15.779224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124883 ]
00:07:01.591 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.591 [2024-07-11 10:53:15.837329] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:01.591 [2024-07-11 10:53:15.837366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.591 [2024-07-11 10:53:15.926211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125008
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125008 /var/tmp/spdk2.sock
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125008 ']'
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:01.848 10:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:01.848 [2024-07-11 10:53:16.213088] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:01.848 [2024-07-11 10:53:16.213177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125008 ]
00:07:02.105 EAL: No free 2048 kB hugepages reported on node 1
00:07:02.105 [2024-07-11 10:53:16.295434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.105 [2024-07-11 10:53:16.463225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.039 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:03.039 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:03.039 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125008
00:07:03.039 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:03.039 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125008
00:07:03.604 lslocks: write error
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 124883
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 124883 ']'
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 124883
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124883
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124883'
killing process with pid 124883
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 124883
00:07:03.604 10:53:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 124883
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125008
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125008 ']'
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 125008
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125008
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125008'
killing process with pid 125008
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 125008
00:07:04.538 10:53:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 125008
00:07:04.798
00:07:04.798 real 0m3.292s
00:07:04.798 user 0m3.479s
00:07:04.798 sys 0m1.131s
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:04.798 ************************************
00:07:04.798 END TEST locking_app_on_unlocked_coremask
00:07:04.798 ************************************
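locking_app_on_unlocked_coremask inverts the previous case: the first target starts unlocked, so the second, lock-claiming target must end up owning spdk_cpu_lock_000, which is why locks_exist runs against pid 125008 here. The no_locks and check_remaining_locks helpers seen in these traces boil down to a glob over /var/tmp; a small sketch of that idea (bash with nullglob assumed so an empty match yields an empty array):

# Sketch: list leftover CPU core lock files, as no_locks does.
shopt -s nullglob
lock_files=(/var/tmp/spdk_cpu_lock_*)
if (( ${#lock_files[@]} != 0 )); then
    printf 'leftover lock: %s\n' "${lock_files[@]}"
fi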
00:07:04.798 10:53:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:04.798 10:53:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:04.798 10:53:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:04.798 10:53:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:04.798 10:53:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:04.798 ************************************
00:07:04.798 START TEST locking_app_on_locked_coremask
00:07:04.798 ************************************
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125321
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125321 /var/tmp/spdk.sock
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125321 ']'
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.798 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:04.799 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:04.799 [2024-07-11 10:53:19.119281] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:04.799 [2024-07-11 10:53:19.119371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125321 ]
00:07:04.799 EAL: No free 2048 kB hugepages reported on node 1
00:07:04.799 [2024-07-11 10:53:19.177587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.057 [2024-07-11 10:53:19.267138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125442
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125442 /var/tmp/spdk2.sock
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125442 /var/tmp/spdk2.sock
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125442 /var/tmp/spdk2.sock
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 125442 ']'
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:05.314 10:53:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:05.314 [2024-07-11 10:53:19.550761] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:05.315 [2024-07-11 10:53:19.550844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125442 ]
00:07:05.315 EAL: No free 2048 kB hugepages reported on node 1
00:07:05.315 [2024-07-11 10:53:19.638952] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125321 has claimed it.
00:07:05.315 [2024-07-11 10:53:19.639027] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:05.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (125442) - No such process
00:07:05.880 ERROR: process (pid: 125442) is no longer running
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125321
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125321
00:07:05.880 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:06.138 lslocks: write error
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125321
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 125321 ']'
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 125321
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125321
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125321'
killing process with pid 125321
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 125321
00:07:06.138 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 125321
00:07:06.706
00:07:06.706 real 0m1.835s
00:07:06.706 user 0m2.021s
00:07:06.706 sys 0m0.576s
00:07:06.706 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:06.706 10:53:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:06.706 ************************************
00:07:06.706 END TEST locking_app_on_locked_coremask
00:07:06.706 ************************************
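The failure path in locking_app_on_locked_coremask is asserted with the harness's NOT/valid_exec_arg wrapper: waitforlisten on the second target is expected to fail, because pid 125442 exited after "Cannot create lock on core 0". A reduced sketch of that expect-failure pattern (the real wrapper additionally distinguishes crash exit codes via the es > 128 check):

# Sketch: run a command that must fail; succeed only if it does.
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}
NOT kill -0 125442 2>/dev/null && echo "pid is gone, as the test expects"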
00:07:06.706 10:53:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:06.706 10:53:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:06.706 10:53:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:06.706 10:53:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:06.706 10:53:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:06.706 ************************************
00:07:06.706 START TEST locking_overlapped_coremask
00:07:06.706 ************************************
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125604
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125604 /var/tmp/spdk.sock
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 125604 ']'
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:06.706 10:53:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:06.706 [2024-07-11 10:53:20.995703] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:06.706 [2024-07-11 10:53:20.995828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125604 ]
00:07:06.706 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.706 [2024-07-11 10:53:21.053868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.965 [2024-07-11 10:53:21.142289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.965 [2024-07-11 10:53:21.142347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.965 [2024-07-11 10:53:21.142349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=125625
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 125625 /var/tmp/spdk2.sock
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125625 /var/tmp/spdk2.sock
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125625 /var/tmp/spdk2.sock
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 125625 ']'
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:07.223 10:53:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:07.223 [2024-07-11 10:53:21.444821] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:07.223 [2024-07-11 10:53:21.444908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125625 ]
00:07:07.223 EAL: No free 2048 kB hugepages reported on node 1
00:07:07.223 [2024-07-11 10:53:21.533910] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125604 has claimed it.
00:07:07.223 [2024-07-11 10:53:21.533966] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:07.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (125625) - No such process
00:07:07.789 ERROR: process (pid: 125625) is no longer running
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125604
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 125604 ']'
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 125604
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125604
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125604'
killing process with pid 125604
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 125604
00:07:07.789 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 125604
00:07:08.357
00:07:08.357 real 0m1.607s
00:07:08.357 user 0m4.374s
00:07:08.357 sys 0m0.450s
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:08.357 ************************************
00:07:08.357 END TEST locking_overlapped_coremask
00:07:08.357 ************************************
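The collision in locking_overlapped_coremask is predictable from the masks alone: 0x7 covers cores 0-2, 0x1c covers cores 2-4, and their intersection is core 2, exactly the core named in the claim_cpu_cores error above. A one-line sketch of that arithmetic:

# 0x7 & 0x1c = 0x4, i.e. bit 2 set -> both targets want core 2.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))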
00:07:08.357 10:53:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:08.357 10:53:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:08.357 10:53:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:08.357 10:53:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:08.357 10:53:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:08.357 ************************************
00:07:08.357 START TEST locking_overlapped_coremask_via_rpc
00:07:08.357 ************************************
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=125789
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 125789 /var/tmp/spdk.sock
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125789 ']'
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:08.357 10:53:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:08.357 [2024-07-11 10:53:22.657256] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:08.357 [2024-07-11 10:53:22.657352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125789 ]
00:07:08.357 EAL: No free 2048 kB hugepages reported on node 1
00:07:08.357 [2024-07-11 10:53:22.717915] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:08.357 [2024-07-11 10:53:22.717954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:08.616 [2024-07-11 10:53:22.808286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:08.616 [2024-07-11 10:53:22.808342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:08.616 [2024-07-11 10:53:22.808345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.874 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:08.874 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:08.874 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=125916
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 125916 /var/tmp/spdk2.sock
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125916 ']'
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:08.875 10:53:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:08.875 [2024-07-11 10:53:23.107699] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:08.875 [2024-07-11 10:53:23.107798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125916 ]
00:07:08.875 EAL: No free 2048 kB hugepages reported on node 1
00:07:09.133 [2024-07-11 10:53:23.194412] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:08.875 [2024-07-11 10:53:23.194445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:09.133 [2024-07-11 10:53:23.370471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:09.133 [2024-07-11 10:53:23.370536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:07:09.133 [2024-07-11 10:53:23.370538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:09.697 [2024-07-11 10:53:24.067854] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125789 has claimed it.
00:07:09.697 request:
00:07:09.697 {
00:07:09.697 "method": "framework_enable_cpumask_locks",
00:07:09.697 "req_id": 1
00:07:09.697 }
00:07:09.697 Got JSON-RPC error response
00:07:09.697 response:
00:07:09.697 {
00:07:09.697 "code": -32603,
00:07:09.697 "message": "Failed to claim CPU core: 2"
00:07:09.697 }
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 125789 /var/tmp/spdk.sock
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125789 ']'
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:09.697 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 125916 /var/tmp/spdk2.sock
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 125916 ']'
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:09.955 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:10.213
00:07:10.213 real 0m1.964s
00:07:10.213 user 0m1.029s
00:07:10.213 sys 0m0.171s
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:10.213 10:53:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:10.213 ************************************
00:07:10.213 END TEST locking_overlapped_coremask_via_rpc
00:07:10.213 ************************************
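The via_rpc variant surfaces the same core-2 conflict as a JSON-RPC error instead of a startup failure, as the request/response pair above shows. A hedged sketch of issuing that call directly over the second target's UNIX socket (nc -U from openbsd-netcat is an assumption; rpc.py is the supported route):

echo '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' \
    | nc -U /var/tmp/spdk2.sock
# Expected reply while pid 125789 still holds core 2 (shape per the log above):
# {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}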
00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125916 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125916' 00:07:10.778 killing process with pid 125916 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 125916 00:07:10.778 10:53:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 125916 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125789 ]] 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125789 00:07:11.037 10:53:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125789 ']' 00:07:11.037 10:53:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125789 00:07:11.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (125789) - No such process 00:07:11.037 10:53:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 125789 is not found' 00:07:11.037 Process with pid 125789 is not found 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125916 ]] 00:07:11.037 10:53:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125916 00:07:11.037 10:53:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 125916 ']' 00:07:11.037 10:53:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 125916 00:07:11.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (125916) - No such process 00:07:11.296 10:53:25 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 125916 is not found' 00:07:11.296 Process with pid 125916 is not found 00:07:11.296 10:53:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.296 00:07:11.296 real 0m15.513s 00:07:11.296 user 0m27.281s 00:07:11.296 sys 0m5.321s 00:07:11.296 10:53:25 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.296 10:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.296 ************************************ 00:07:11.296 END TEST cpu_locks 00:07:11.296 ************************************ 00:07:11.296 10:53:25 event -- common/autotest_common.sh@1142 -- # return 0 00:07:11.296 00:07:11.296 real 0m39.089s 00:07:11.296 user 1m14.941s 00:07:11.296 sys 0m9.327s 00:07:11.296 10:53:25 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.296 10:53:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.296 ************************************ 00:07:11.296 END TEST event 00:07:11.296 ************************************ 00:07:11.296 10:53:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.296 10:53:25 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:11.296 10:53:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.296 10:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.296 10:53:25 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.296 ************************************ 00:07:11.296 START TEST thread 00:07:11.296 ************************************ 00:07:11.296 10:53:25 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:11.296 * Looking for test storage... 00:07:11.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:11.296 10:53:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.296 10:53:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:11.296 10:53:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.296 10:53:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.296 ************************************ 00:07:11.296 START TEST thread_poller_perf 00:07:11.296 ************************************ 00:07:11.296 10:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.296 [2024-07-11 10:53:25.613362] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:11.296 [2024-07-11 10:53:25.613430] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126283 ] 00:07:11.296 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.296 [2024-07-11 10:53:25.667972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.554 [2024-07-11 10:53:25.748568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.554 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:12.489 ====================================== 00:07:12.489 busy:2710087347 (cyc) 00:07:12.489 total_run_count: 364000 00:07:12.489 tsc_hz: 2700000000 (cyc) 00:07:12.489 ====================================== 00:07:12.489 poller_cost: 7445 (cyc), 2757 (nsec) 00:07:12.489 00:07:12.489 real 0m1.229s 00:07:12.489 user 0m1.150s 00:07:12.489 sys 0m0.074s 00:07:12.489 10:53:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.489 10:53:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.489 ************************************ 00:07:12.489 END TEST thread_poller_perf 00:07:12.489 ************************************ 00:07:12.489 10:53:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:12.489 10:53:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.489 10:53:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:12.489 10:53:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.489 10:53:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.489 ************************************ 00:07:12.489 START TEST thread_poller_perf 00:07:12.489 ************************************ 00:07:12.489 10:53:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.489 [2024-07-11 10:53:26.893797] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:12.489 [2024-07-11 10:53:26.893865] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126436 ] 00:07:12.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.746 [2024-07-11 10:53:26.949998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.746 [2024-07-11 10:53:27.033604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.746 Running 1000 pollers for 1 seconds with 0 microseconds period. 
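[editor's note] Before the 0-microsecond-period numbers land below, a note on reading these result blocks: poller_cost is busy cycles divided by iterations, converted to wall time via tsc_hz. Worked check against the 1-microsecond-period run above:

  poller_cost = busy / total_run_count = 2710087347 / 364000 ≈ 7445 cyc
  7445 cyc / 2.7 GHz (tsc_hz) ≈ 2757 nsec

Judging by the banners, the flags decode as -b 1000 (pollers), -l (period in microseconds), -t 1 (seconds); the -l 0 invocation measures bare back-to-back poller dispatch with no timer period.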
00:07:14.120 ====================================== 00:07:14.120 busy:2702227389 (cyc) 00:07:14.120 total_run_count: 4622000 00:07:14.120 tsc_hz: 2700000000 (cyc) 00:07:14.120 ====================================== 00:07:14.120 poller_cost: 584 (cyc), 216 (nsec) 00:07:14.120 00:07:14.120 real 0m1.231s 00:07:14.120 user 0m1.152s 00:07:14.120 sys 0m0.075s 00:07:14.120 10:53:28 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.120 10:53:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.120 ************************************ 00:07:14.120 END TEST thread_poller_perf 00:07:14.120 ************************************ 00:07:14.120 10:53:28 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:14.120 10:53:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:14.120 00:07:14.120 real 0m2.603s 00:07:14.120 user 0m2.352s 00:07:14.120 sys 0m0.251s 00:07:14.120 10:53:28 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.120 10:53:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.120 ************************************ 00:07:14.120 END TEST thread 00:07:14.120 ************************************ 00:07:14.120 10:53:28 -- common/autotest_common.sh@1142 -- # return 0 00:07:14.120 10:53:28 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:14.120 10:53:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.120 10:53:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.120 10:53:28 -- common/autotest_common.sh@10 -- # set +x 00:07:14.120 ************************************ 00:07:14.120 START TEST accel 00:07:14.120 ************************************ 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:14.120 * Looking for test storage... 00:07:14.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:14.120 10:53:28 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:14.120 10:53:28 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:14.120 10:53:28 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:14.120 10:53:28 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=126637 00:07:14.120 10:53:28 accel -- accel/accel.sh@63 -- # waitforlisten 126637 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@829 -- # '[' -z 126637 ']' 00:07:14.120 10:53:28 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.120 10:53:28 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.120 10:53:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
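[editor's note] Here the accel suite's spdk_tgt comes up with its JSON config fed through /dev/fd/63, and waitforlisten blocks until the UNIX-domain RPC socket answers before any test runs. A simplified sketch of that readiness loop, assuming the stock scripts/rpc.py and an rpc_get_methods probe (the real helper, with max_retries=100, lives in autotest_common.sh):

  # poll the RPC socket until spdk_tgt is ready to serve requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done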
00:07:14.120 10:53:28 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.120 10:53:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.120 10:53:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.120 10:53:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.120 10:53:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.120 10:53:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.120 10:53:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:14.120 10:53:28 accel -- accel/accel.sh@41 -- # jq -r . 00:07:14.120 [2024-07-11 10:53:28.282088] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:14.120 [2024-07-11 10:53:28.282210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126637 ] 00:07:14.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.120 [2024-07-11 10:53:28.340497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.120 [2024-07-11 10:53:28.428019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@862 -- # return 0 00:07:14.379 10:53:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:14.379 10:53:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:14.379 10:53:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:14.379 10:53:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:14.379 10:53:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:14.379 10:53:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:14.379 10:53:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 
10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:14.379 10:53:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:14.379 10:53:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:14.379 10:53:28 accel -- accel/accel.sh@75 -- # killprocess 126637 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@948 -- # '[' -z 126637 ']' 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@952 -- # kill -0 126637 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@953 -- # uname 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126637 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126637' 00:07:14.379 killing process with pid 126637 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@967 -- # kill 126637 00:07:14.379 10:53:28 accel -- common/autotest_common.sh@972 -- # wait 126637 00:07:14.946 10:53:29 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:14.946 10:53:29 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.946 10:53:29 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:14.946 10:53:29 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:14.946 10:53:29 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.946 10:53:29 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.946 10:53:29 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.946 10:53:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.946 ************************************ 00:07:14.946 START TEST accel_missing_filename 00:07:14.946 ************************************ 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.946 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:14.946 10:53:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:14.946 [2024-07-11 10:53:29.234128] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:14.946 [2024-07-11 10:53:29.234192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126803 ] 00:07:14.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.946 [2024-07-11 10:53:29.291238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.205 [2024-07-11 10:53:29.375694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.205 [2024-07-11 10:53:29.430388] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.205 [2024-07-11 10:53:29.506660] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:15.205 A filename is required. 
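[editor's note] "A filename is required." is the expected abort here: compress/decompress workloads take their input through -l, so accel_missing_filename runs accel_perf without it and the NOT wrapper counts the non-zero exit as a pass (the status handling just below folds the raw code 234 through 106 down to es=1). A one-line reproduction against the same binary, path as printed above:

  # expected failure: compress with no -l <input file>
  ./build/examples/accel_perf -t 1 -w compress; echo "exit: $?"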
00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.205 00:07:15.205 real 0m0.370s 00:07:15.205 user 0m0.263s 00:07:15.205 sys 0m0.143s 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.205 10:53:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:15.205 ************************************ 00:07:15.205 END TEST accel_missing_filename 00:07:15.205 ************************************ 00:07:15.205 10:53:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.205 10:53:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:15.205 10:53:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:15.205 10:53:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.205 10:53:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.464 ************************************ 00:07:15.464 START TEST accel_compress_verify 00:07:15.464 ************************************ 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.464 10:53:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:15.464 10:53:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:15.464 10:53:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:15.464 10:53:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.465 10:53:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.465 10:53:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.465 10:53:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.465 10:53:29 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.465 10:53:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:15.465 10:53:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:15.465 [2024-07-11 10:53:29.649639] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:15.465 [2024-07-11 10:53:29.649702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126944 ] 00:07:15.465 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.465 [2024-07-11 10:53:29.706232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.465 [2024-07-11 10:53:29.792012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.465 [2024-07-11 10:53:29.847806] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.724 [2024-07-11 10:53:29.930253] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:15.724 00:07:15.724 Compression does not support the verify option, aborting. 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.724 00:07:15.724 real 0m0.375s 00:07:15.724 user 0m0.275s 00:07:15.724 sys 0m0.136s 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.724 10:53:30 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.724 ************************************ 00:07:15.724 END TEST accel_compress_verify 00:07:15.724 ************************************ 00:07:15.724 10:53:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.724 10:53:30 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:15.724 10:53:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.724 10:53:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.724 10:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.724 ************************************ 00:07:15.724 START TEST accel_wrong_workload 00:07:15.724 ************************************ 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.724 10:53:30 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:15.724 10:53:30 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:15.724 Unsupported workload type: foobar 00:07:15.724 [2024-07-11 10:53:30.074370] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:15.724 accel_perf options: 00:07:15.724 [-h help message] 00:07:15.724 [-q queue depth per core] 00:07:15.724 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.724 [-T number of threads per core 00:07:15.724 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.724 [-t time in seconds] 00:07:15.724 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.724 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:15.724 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.724 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.724 [-S for crc32c workload, use this seed value (default 0) 00:07:15.724 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.724 [-f for fill workload, use this BYTE value (default 255) 00:07:15.724 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.724 [-y verify result if this switch is on] 00:07:15.724 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.724 Can be used to spread operations across a wider range of memory. 
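[editor's note] The full option dump above is accel_perf rejecting -w foobar at argument parsing; accel_wrong_workload exists purely to assert that rejection (es=1 in the lines below), and the stray "Error: writing output failed: Broken pipe" is presumably the tool's output stream being closed early by the harness rather than a separate failure. A minimal negative check with the same binary:

  # rejected at parse time; any workload outside the -w list above fails the same way
  ./build/examples/accel_perf -t 1 -w foobar || echo "rejected as expected"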
00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.724 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.724 00:07:15.724 real 0m0.023s 00:07:15.724 user 0m0.010s 00:07:15.724 sys 0m0.013s 00:07:15.725 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.725 10:53:30 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:15.725 ************************************ 00:07:15.725 END TEST accel_wrong_workload 00:07:15.725 ************************************ 00:07:15.725 Error: writing output failed: Broken pipe 00:07:15.725 10:53:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.725 10:53:30 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.725 10:53:30 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:15.725 10:53:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.725 10:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.725 ************************************ 00:07:15.725 START TEST accel_negative_buffers 00:07:15.725 ************************************ 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:15.725 10:53:30 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:15.725 -x option must be non-negative. 
00:07:15.725 [2024-07-11 10:53:30.141142] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:15.725 accel_perf options: 00:07:15.725 [-h help message] 00:07:15.725 [-q queue depth per core] 00:07:15.725 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.725 [-T number of threads per core 00:07:15.725 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.725 [-t time in seconds] 00:07:15.725 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.725 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:15.725 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.725 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.725 [-S for crc32c workload, use this seed value (default 0) 00:07:15.725 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.725 [-f for fill workload, use this BYTE value (default 255) 00:07:15.725 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.725 [-y verify result if this switch is on] 00:07:15.725 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.725 Can be used to spread operations across a wider range of memory. 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.725 00:07:15.725 real 0m0.022s 00:07:15.725 user 0m0.010s 00:07:15.725 sys 0m0.012s 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.725 10:53:30 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:15.725 ************************************ 00:07:15.725 END TEST accel_negative_buffers 00:07:15.725 ************************************ 00:07:15.984 10:53:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.984 10:53:30 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:15.984 10:53:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:15.984 10:53:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.984 10:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.984 Error: writing output failed: Broken pipe 00:07:15.984 ************************************ 00:07:15.984 START TEST accel_crc32c 00:07:15.984 ************************************ 00:07:15.984 10:53:30 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:15.984 [2024-07-11 10:53:30.203340] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:15.984 [2024-07-11 10:53:30.203403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127015 ] 00:07:15.984 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.984 [2024-07-11 10:53:30.261696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.984 [2024-07-11 10:53:30.348645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.984 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.243 10:53:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:17.177 10:53:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.177 00:07:17.177 real 0m1.383s 00:07:17.177 user 0m1.245s 00:07:17.177 sys 0m0.140s 00:07:17.177 10:53:31 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.177 10:53:31 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:17.177 ************************************ 00:07:17.177 END TEST accel_crc32c 00:07:17.177 ************************************ 00:07:17.177 10:53:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.177 10:53:31 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:17.177 10:53:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:17.177 10:53:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.177 10:53:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.436 ************************************ 00:07:17.436 START TEST accel_crc32c_C2 00:07:17.436 ************************************ 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:17.436 [2024-07-11 10:53:31.635375] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:17.436 [2024-07-11 10:53:31.635440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127167 ] 00:07:17.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.436 [2024-07-11 10:53:31.694970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.436 [2024-07-11 10:53:31.787637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.436 10:53:31 
00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:17.436 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:17.437 10:53:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:18.813 10:53:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:18.813 10:53:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:18.813 10:53:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:18.813
00:07:18.813 real	0m1.384s
00:07:18.813 user	0m1.241s
00:07:18.813 sys	0m0.146s
00:07:18.813 10:53:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.813 10:53:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:18.813 ************************************
00:07:18.813 END TEST accel_crc32c_C2
00:07:18.813 ************************************
00:07:18.813 10:53:33 accel -- common/autotest_common.sh@1142 -- # return 0
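Every val= entry in the trace is produced by the same scaffold in accel.sh (visible above as the @19/@21 entries): IFS=: splits a var:val pair, read -r pulls it in, and case "$var" dispatches on it. A minimal sketch of that pattern, with illustrative key names that are assumptions here, not the real keys accel.sh uses:

    # Hypothetical stand-in for the accel.sh@19-23 loop seen in the trace.
    printf '%s\n' 'opc:copy' 'module:software' |
    while IFS=: read -r var val; do
      case "$var" in
        opc)    echo "accel_opc=$val" ;;     # cf. the accel.sh@23 entries
        module) echo "accel_module=$val" ;;  # cf. the accel.sh@22 entries
        *)      : ;;                         # anything else falls through
      esac
    done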
00:07:18.813 10:53:33 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:18.813 10:53:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:18.813 10:53:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:18.813 10:53:33 accel -- common/autotest_common.sh@10 -- # set +x
00:07:18.813 ************************************
00:07:18.813 START TEST accel_copy
00:07:18.813 ************************************
00:07:18.813 10:53:33 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:18.813 10:53:33 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:18.813 [2024-07-11 10:53:33.066447] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:18.813 [2024-07-11 10:53:33.066508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127440 ]
00:07:18.813 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.813 [2024-07-11 10:53:33.122369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.813 [2024-07-11 10:53:33.206580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.072 10:53:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:20.006 10:53:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:20.006 10:53:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:20.006 10:53:34 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:20.006
00:07:20.006 real	0m1.378s
00:07:20.006 user	0m1.238s
00:07:20.006 sys	0m0.141s
00:07:20.006 10:53:34 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:20.006 10:53:34 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:20.006 ************************************
00:07:20.006 END TEST accel_copy
00:07:20.006 ************************************
00:07:20.265 10:53:34 accel -- common/autotest_common.sh@1142 -- # return 0
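The accel.sh@12 entry records the exact binary each wrapper run boils down to, with its JSON accel config piped in on /dev/fd/62. The same 1-second, 4096-byte copy workload can be launched by hand; the path below is the workspace path from this log, and running without -c (letting the app fall back to its built-in software-module defaults) is an assumption standing in for whatever build_accel_config feeds through jq:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for 1 second; -w copy: copy workload; -y: verify the result
    sudo "$SPDK/build/examples/accel_perf" -t 1 -w copy -y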
00:07:20.265 10:53:34 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:20.265 10:53:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:20.265 ************************************
00:07:20.265 START TEST accel_fill
00:07:20.265 ************************************
00:07:20.265 10:53:34 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:20.265 10:53:34 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:20.265 [2024-07-11 10:53:34.494670] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:20.265 [2024-07-11 10:53:34.494740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127597 ]
00:07:20.265 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.265 [2024-07-11 10:53:34.552637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.265 [2024-07-11 10:53:34.636849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:20.524 10:53:34 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:21.458 10:53:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:21.458 10:53:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:21.458 10:53:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:21.458
00:07:21.458 real	0m1.378s
00:07:21.458 user	0m1.253s
00:07:21.458 sys	0m0.127s
00:07:21.458 10:53:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:21.458 10:53:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:21.458 ************************************
00:07:21.458 END TEST accel_fill
00:07:21.458 ************************************
00:07:21.458 10:53:35 accel -- common/autotest_common.sh@1142 -- # return 0
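The fill trace is the first whose settings differ from the copy defaults: -f 128 surfaces as val=0x80 (the fill byte, echoed in hex), while -q 64 and -a 64 account for the two val=64 entries in place of copy's val=32 pair. The hex relationship is plain shell arithmetic:

    printf 'fill byte: 0x%x\n' 128   # prints 0x80, matching the val=0x80 entry
    echo $(( 0x80 ))                 # prints 128, the -f argument given above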
00:07:21.458 10:53:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:21.458 10:53:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:21.717 ************************************
00:07:21.717 START TEST accel_copy_crc32c
00:07:21.717 ************************************
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:21.717 10:53:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:21.717 [2024-07-11 10:53:35.921543] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:21.717 [2024-07-11 10:53:35.921608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127752 ]
00:07:21.717 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.717 [2024-07-11 10:53:35.976665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.718 [2024-07-11 10:53:36.061370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:21.718 10:53:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:23.118 10:53:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.118 10:53:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:23.118 10:53:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.118
00:07:23.118 real	0m1.378s
00:07:23.118 user	0m1.249s
00:07:23.118 sys	0m0.132s
00:07:23.118 10:53:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:23.118 10:53:37 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:23.119 ************************************
00:07:23.119 END TEST accel_copy_crc32c
00:07:23.119 ************************************
00:07:23.119 10:53:37 accel -- common/autotest_common.sh@1142 -- # return 0
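Each END TEST banner is preceded by a time(1)-style real/user/sys summary for that test. When comparing runs, the pairs can be pulled straight out of a saved log with standard tools; build.log is a placeholder name for a capture of this output:

    # Print "<test name> <wall-clock time>" for every completed accel test.
    awk '/real[[:space:]]/ { r = $NF }
         /END TEST/       { print $NF, r }' build.log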
00:07:23.119 10:53:37 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:23.119 10:53:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:23.119 ************************************
00:07:23.119 START TEST accel_copy_crc32c_C2
00:07:23.119 ************************************
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:23.119 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:23.119 [2024-07-11 10:53:37.351994] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:23.119 [2024-07-11 10:53:37.352069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127946 ]
00:07:23.119 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.119 [2024-07-11 10:53:37.411451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.376 [2024-07-11 10:53:37.495759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:23.376 10:53:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:24.306 10:53:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:24.306 10:53:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:24.306 10:53:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:24.306
00:07:24.306 real	0m1.375s
00:07:24.306 user	0m1.235s
00:07:24.306 sys	0m0.142s
00:07:24.306 10:53:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:24.306 10:53:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:24.563 ************************************
00:07:24.563 END TEST accel_copy_crc32c_C2
00:07:24.563 ************************************
00:07:24.563 10:53:38 accel -- common/autotest_common.sh@1142 -- # return 0
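The only deltas from the plain copy_crc32c run above are the -C 2 flag and the second buffer entry: the source side reports '8192 bytes' while the destination stays at '4096 bytes', consistent with the CRC being computed over two 4096-byte chunks (an inference from the trace, not something the log states outright):

    echo $(( 2 * 4096 ))   # prints 8192, matching the val='8192 bytes' entry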
00:07:24.563 10:53:38 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:24.563 10:53:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:24.563 ************************************
00:07:24.563 START TEST accel_dualcast
00:07:24.563 ************************************
00:07:24.563 10:53:38 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:07:24.563 10:53:38 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:07:24.564 [2024-07-11 10:53:38.779272] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:07:24.564 [2024-07-11 10:53:38.779337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128183 ]
00:07:24.564 EAL: No free 2048 kB hugepages reported on node 1
00:07:24.564 [2024-07-11 10:53:38.837860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.564 [2024-07-11 10:53:38.920303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:24.564 10:53:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:25.935 10:53:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:25.935 10:53:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:25.935 10:53:40 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:25.935
00:07:25.935 real	0m1.381s
00:07:25.935 user	0m1.255s
00:07:25.935 sys	0m0.128s
00:07:25.935 10:53:40 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.935 10:53:40 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:25.935 ************************************
00:07:25.935 END TEST accel_dualcast
00:07:25.935 ************************************
00:07:25.935 10:53:40 accel -- common/autotest_common.sh@1142 -- # return 0
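Every test tears down through xtrace_disable and set +x before its END banner prints, which is why the banners themselves never appear as traced commands. The core idiom, with a hypothetical test body (xtrace_disable is the harness's helper name; its effect being the usual quiet set +x is an assumption):

    set -x                      # trace commands, as autotest does
    true                        # stand-in for the real test body
    { set +x; } 2>/dev/null     # drop tracing without tracing the drop itself
    echo '*** END TEST example ***'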
00:07:25.935 [2024-07-11 10:53:40.205497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128336 ] 00:07:25.935 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.935 [2024-07-11 10:53:40.263199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.935 [2024-07-11 10:53:40.347214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.194 10:53:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 
10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:27.569 10:53:41 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.569 00:07:27.569 real 0m1.380s 00:07:27.569 user 0m1.245s 00:07:27.569 sys 0m0.137s 00:07:27.569 10:53:41 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.569 10:53:41 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:27.569 ************************************ 00:07:27.569 END TEST accel_compare 00:07:27.569 ************************************ 00:07:27.569 10:53:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.569 10:53:41 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:27.569 10:53:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.569 10:53:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.569 10:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.569 ************************************ 00:07:27.569 START TEST accel_xor 00:07:27.569 ************************************ 00:07:27.569 10:53:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:27.569 [2024-07-11 10:53:41.635149] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
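Each test closes the same way: the [[ -n software ]] and [[ -n <opcode> ]] checks confirm that the software accel module actually executed the requested opcode, and the real/user/sys triple is the shell's timing for the roughly one-second run. The xor test starting here keeps the defaults; the val=2 read in the trace below suggests two xor source buffers. A hypothetical standalone form under the same assumptions as above:

    # xor across the default two source buffers, one-second run, verified
    ./build/examples/accel_perf -t 1 -w xor -y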
00:07:27.569 [2024-07-11 10:53:41.635213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128497 ] 00:07:27.569 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.569 [2024-07-11 10:53:41.693557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.569 [2024-07-11 10:53:41.776048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.569 10:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:28.946 10:53:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.946 00:07:28.946 real 0m1.381s 00:07:28.946 user 0m1.251s 00:07:28.946 sys 0m0.132s 00:07:28.946 10:53:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.946 10:53:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 ************************************ 00:07:28.946 END TEST accel_xor 00:07:28.946 ************************************ 00:07:28.946 10:53:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.946 10:53:43 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:28.946 10:53:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:28.946 10:53:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.946 10:53:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.946 ************************************ 00:07:28.946 START TEST accel_xor 00:07:28.946 ************************************ 00:07:28.946 10:53:43 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:28.946 [2024-07-11 10:53:43.058511] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
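The second xor pass re-runs the same workload with -x 3, which, as the val=3 read below indicates, raises the number of xor source buffers from the default two to three. Sketch under the same assumptions:

    # same xor workload, now across three source buffers (-x 3)
    ./build/examples/accel_perf -t 1 -w xor -y -x 3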
00:07:28.946 [2024-07-11 10:53:43.058575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128722 ] 00:07:28.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.946 [2024-07-11 10:53:43.116432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.946 [2024-07-11 10:53:43.199429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.946 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.947 10:53:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.321 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:30.322 10:53:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.322 00:07:30.322 real 0m1.365s 00:07:30.322 user 0m1.228s 00:07:30.322 sys 0m0.139s 00:07:30.322 10:53:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.322 10:53:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:30.322 ************************************ 00:07:30.322 END TEST accel_xor 00:07:30.322 ************************************ 00:07:30.322 10:53:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.322 10:53:44 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:30.322 10:53:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:30.322 10:53:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.322 10:53:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.322 ************************************ 00:07:30.322 START TEST accel_dif_verify 00:07:30.322 ************************************ 00:07:30.322 10:53:44 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:30.322 [2024-07-11 10:53:44.465915] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
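The dif_verify test that starts here exercises the data-integrity opcodes. The value reads in the trace below — two '4096 bytes' buffers plus '512 bytes' and '8 bytes' — are consistent with accel_perf checking protection information over the payload, though the exact mapping of those sizes to block and metadata parameters is an inference from the trace, not something the log states. Note also val=No: unlike the copy/xor cases, no -y verify flag is passed. Sketch:

    # DIF verify workload; buffer/metadata sizes come from accel_perf defaults here
    ./build/examples/accel_perf -t 1 -w dif_verify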
00:07:30.322 [2024-07-11 10:53:44.465985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128928 ] 00:07:30.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.322 [2024-07-11 10:53:44.522324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.322 [2024-07-11 10:53:44.605324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:30.322 10:53:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:31.698 10:53:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.698 00:07:31.698 real 0m1.378s 00:07:31.698 user 0m1.245s 00:07:31.698 sys 0m0.136s 00:07:31.698 10:53:45 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.698 10:53:45 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:31.698 ************************************ 00:07:31.698 END TEST accel_dif_verify 00:07:31.698 ************************************ 00:07:31.698 10:53:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.698 10:53:45 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:31.698 10:53:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:31.698 10:53:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.698 10:53:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.698 ************************************ 00:07:31.698 START TEST accel_dif_generate 00:07:31.698 ************************************ 00:07:31.698 10:53:45 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 
10:53:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:31.698 10:53:45 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:31.698 [2024-07-11 10:53:45.889902] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:31.698 [2024-07-11 10:53:45.889964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129081 ] 00:07:31.698 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.698 [2024-07-11 10:53:45.945911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.698 [2024-07-11 10:53:46.029970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:31.698 10:53:46 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.698 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.699 10:53:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.073 10:53:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:33.073 10:53:47 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.073 00:07:33.073 real 0m1.378s 00:07:33.073 user 0m1.250s 00:07:33.073 sys 0m0.131s 00:07:33.073 10:53:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.073 10:53:47 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:33.073 ************************************ 00:07:33.073 END TEST accel_dif_generate 00:07:33.073 ************************************ 00:07:33.073 10:53:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.073 10:53:47 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:33.073 10:53:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:33.073 10:53:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.073 10:53:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.073 ************************************ 00:07:33.073 START TEST accel_dif_generate_copy 00:07:33.073 ************************************ 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:33.073 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:33.073 [2024-07-11 10:53:47.313734] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
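dif_generate_copy, starting here, appears to chain DIF generation with a copy into a separate destination buffer, which would explain why the trace below reads two '4096 bytes' buffer values but, like the other DIF cases, no verify flag (val=No). Sketch under the same assumptions as the earlier examples:

    # generate protection information and copy payload to a second buffer
    ./build/examples/accel_perf -t 1 -w dif_generate_copy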
00:07:33.073 [2024-07-11 10:53:47.313832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129233 ] 00:07:33.073 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.073 [2024-07-11 10:53:47.371211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.073 [2024-07-11 10:53:47.462845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.332 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:33.333 10:53:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.268 00:07:34.268 real 0m1.375s 00:07:34.268 user 0m1.240s 00:07:34.268 sys 0m0.137s 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.268 10:53:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.268 ************************************ 00:07:34.268 END TEST accel_dif_generate_copy 00:07:34.268 ************************************ 00:07:34.527 10:53:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.527 10:53:48 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:34.527 10:53:48 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.527 10:53:48 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:34.527 10:53:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.527 10:53:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.527 ************************************ 00:07:34.527 START TEST accel_comp 00:07:34.527 ************************************ 00:07:34.527 10:53:48 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.527 10:53:48 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:34.527 [2024-07-11 10:53:48.741003] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:34.527 [2024-07-11 10:53:48.741079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129450 ] 00:07:34.527 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.527 [2024-07-11 10:53:48.799938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.527 [2024-07-11 10:53:48.884905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.527 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.528 10:53:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.903 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:35.904 10:53:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.904 00:07:35.904 real 0m1.380s 00:07:35.904 user 0m1.248s 00:07:35.904 sys 0m0.135s 00:07:35.904 10:53:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.904 10:53:50 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:35.904 ************************************ 00:07:35.904 END TEST accel_comp 00:07:35.904 ************************************ 00:07:35.904 10:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.904 10:53:50 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.904 10:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.904 10:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.904 10:53:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.904 ************************************ 00:07:35.904 START TEST accel_decomp 00:07:35.904 ************************************ 00:07:35.904 10:53:50 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:35.904 10:53:50 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:35.904 [2024-07-11 10:53:50.162954] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
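This decompress case can be re-run with the exact command the harness logs here: -t 1 runs for one second, -w picks the workload, -l names the compressed input, and -y (judging by the val=Yes read back below, against val=No in the runs without it) turns on output verification. The -c /dev/fd/62 argument appears to be the JSON accel config that build_accel_config hands in over fd 62, so outside the harness it would need to point at a real config file instead:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y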
00:07:35.904 [2024-07-11 10:53:50.163012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129668 ] 00:07:35.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.904 [2024-07-11 10:53:50.218591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.904 [2024-07-11 10:53:50.301906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.163 10:53:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.098 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.357 10:53:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.357 00:07:37.357 real 0m1.380s 00:07:37.357 user 0m1.249s 00:07:37.357 sys 0m0.133s 00:07:37.357 10:53:51 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.357 10:53:51 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:37.357 ************************************ 00:07:37.357 END TEST accel_decomp 00:07:37.357 ************************************ 00:07:37.357 10:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.357 10:53:51 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.357 10:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:37.357 10:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.357 10:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.357 ************************************ 00:07:37.357 START TEST accel_decomp_full 00:07:37.357 ************************************ 00:07:37.357 10:53:51 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.357 10:53:51 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:37.357 10:53:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:37.357 [2024-07-11 10:53:51.594440] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:37.357 [2024-07-11 10:53:51.594504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129827 ] 00:07:37.357 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.357 [2024-07-11 10:53:51.651510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.357 [2024-07-11 10:53:51.734986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.616 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.617 10:53:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.552 10:53:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.552 00:07:38.552 real 0m1.394s 00:07:38.552 user 0m1.257s 00:07:38.552 sys 0m0.140s 00:07:38.552 10:53:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.552 10:53:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:38.552 ************************************ 00:07:38.552 END TEST accel_decomp_full 00:07:38.552 ************************************ 00:07:38.811 10:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.811 10:53:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.811 10:53:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
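Every block in this section goes through the same wrapper: run_test prints the START banner, rejects single-argument calls ('[' 11 -le 1 ']' above is that arity check, 11 being the argument count including the test name), times the body, and closes with the real/user/sys summary and the END banner. A rough reconstruction from the xtrace alone, not the actual helper in common/autotest_common.sh:

    run_test() {
        [ "$#" -le 1 ] && return 1   # the '[' N -le 1 ']' check in the log
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                    # emits the real/user/sys lines
        local rc=$?
        echo "END TEST $name"
        return $rc
    }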
00:07:38.811 10:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.811 10:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.811 ************************************ 00:07:38.811 START TEST accel_decomp_mcore 00:07:38.811 ************************************ 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:38.811 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:38.811 [2024-07-11 10:53:53.039092] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
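The only difference from the single-core decompress run is -m 0xf: mask bits 0-3 are set, so the EAL lines that follow report four available cores and reactors on cores 0-3 instead of one, and the closing summary shows user time well above real (0m4.642s against 0m1.376s below). Expanding such a mask:

    mask=0xf
    for bit in {0..31}; do
        (( (mask >> bit) & 1 )) && echo "core $bit enabled"
    done
    # prints: core 0 enabled ... core 3 enabled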
00:07:38.811 [2024-07-11 10:53:53.039154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129979 ] 00:07:38.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.811 [2024-07-11 10:53:53.096917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.811 [2024-07-11 10:53:53.182177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.811 [2024-07-11 10:53:53.182239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.811 [2024-07-11 10:53:53.182302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.811 [2024-07-11 10:53:53.182305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.070 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:39.071 10:53:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.004 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.005 00:07:40.005 real 0m1.376s 00:07:40.005 user 0m4.642s 00:07:40.005 sys 0m0.142s 00:07:40.005 10:53:54 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.005 10:53:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:40.005 ************************************ 00:07:40.005 END TEST accel_decomp_mcore 00:07:40.005 ************************************ 00:07:40.005 10:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.005 10:53:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.005 10:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:40.005 10:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.005 10:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.264 ************************************ 00:07:40.264 START TEST accel_decomp_full_mcore 00:07:40.264 ************************************ 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.264 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:40.265 [2024-07-11 10:53:54.459313] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
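accel_decomp_full_mcore combines the two earlier variations: -o 0, which per the val='111250 bytes' entries below swaps the default 4096-byte transfer for the full bib file, and the -m 0xf four-core mask. The standalone equivalent, paths as logged in this workspace (outside the harness the -c /dev/fd/62 config feed must again be supplied separately or replaced with a config file):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf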
00:07:40.265 [2024-07-11 10:53:54.459375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130250 ] 00:07:40.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.265 [2024-07-11 10:53:54.516572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.265 [2024-07-11 10:53:54.611710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.265 [2024-07-11 10:53:54.611784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.265 [2024-07-11 10:53:54.611811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.265 [2024-07-11 10:53:54.611813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.265 10:53:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.641 00:07:41.641 real 0m1.413s 00:07:41.641 user 0m4.741s 00:07:41.641 sys 0m0.151s 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.641 10:53:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:41.641 ************************************ 00:07:41.641 END TEST accel_decomp_full_mcore 00:07:41.641 ************************************ 00:07:41.641 10:53:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.641 10:53:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.641 10:53:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:41.641 10:53:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.641 10:53:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.641 ************************************ 00:07:41.641 START TEST accel_decomp_mthread 00:07:41.641 ************************************ 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:41.641 10:53:55 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:41.641 [2024-07-11 10:53:55.915748] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
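The shell trace above records the exact accel_perf invocation for this test. Below is a minimal stand-alone replay — a sketch, not the harness itself: the workspace path is copied from the trace, and the flag readings (-t run time in seconds, -w workload type, -l compressed input file, -y verify the output, -T worker thread count) are inferred from context rather than from documented help text.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as seen in the trace
# -c /dev/fd/62 is dropped here: the trace shows accel_json_cfg=() empty, so the
# harness feeds an effectively empty JSON accel config on fd 62 for this run.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -T 2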
00:07:41.641 [2024-07-11 10:53:55.915823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130420 ] 00:07:41.641 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.641 [2024-07-11 10:53:55.973150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.641 [2024-07-11 10:53:56.056652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.900 10:53:56 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.900 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.901 10:53:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.275 00:07:43.275 real 0m1.386s 00:07:43.275 user 0m1.243s 00:07:43.275 sys 0m0.146s 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.275 10:53:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:43.275 ************************************ 00:07:43.275 END TEST accel_decomp_mthread 00:07:43.275 ************************************ 00:07:43.275 10:53:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.275 10:53:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.275 10:53:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:43.275 10:53:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.275 10:53:57 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.275 ************************************ 00:07:43.275 START TEST accel_decomp_full_mthread 00:07:43.275 ************************************ 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.275 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:43.276 [2024-07-11 10:53:57.350433] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
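Reading the trace, this "full" variant differs from accel_decomp_mthread above only by -o 0, and the traced buffer size (shown just below) changes accordingly from '4096 bytes' to '111250 bytes' — apparently the whole bib test file rather than 4 KiB chunks, an inference from the val= lines, not a documented contract. A sketch under the same assumptions as the replay above:

# accel_decomp_full_mthread: same replay plus -o 0 for full-size buffers
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2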
00:07:43.276 [2024-07-11 10:53:57.350498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130573 ] 00:07:43.276 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.276 [2024-07-11 10:53:57.405273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.276 [2024-07-11 10:53:57.489903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.276 10:53:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.678 00:07:44.678 real 0m1.403s 00:07:44.678 user 0m1.272s 00:07:44.678 sys 0m0.133s 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.678 10:53:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:44.678 ************************************ 00:07:44.678 END TEST accel_decomp_full_mthread 
00:07:44.678 ************************************ 00:07:44.678 10:53:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.678 10:53:58 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:44.678 10:53:58 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.678 10:53:58 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:44.678 10:53:58 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:44.678 10:53:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.678 10:53:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.678 10:53:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.678 10:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.678 10:53:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.678 10:53:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.678 10:53:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.678 10:53:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:44.678 10:53:58 accel -- accel/accel.sh@41 -- # jq -r . 00:07:44.678 ************************************ 00:07:44.678 START TEST accel_dif_functional_tests 00:07:44.678 ************************************ 00:07:44.678 10:53:58 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.678 [2024-07-11 10:53:58.825153] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:44.678 [2024-07-11 10:53:58.825213] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130731 ] 00:07:44.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.678 [2024-07-11 10:53:58.886144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.678 [2024-07-11 10:53:58.980254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.678 [2024-07-11 10:53:58.983772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.679 [2024-07-11 10:53:58.983835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.679 00:07:44.679 00:07:44.679 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.679 http://cunit.sourceforge.net/ 00:07:44.679 00:07:44.679 00:07:44.679 Suite: accel_dif 00:07:44.679 Test: verify: DIF generated, GUARD check ...passed 00:07:44.679 Test: verify: DIF generated, APPTAG check ...passed 00:07:44.679 Test: verify: DIF generated, REFTAG check ...passed 00:07:44.679 Test: verify: DIF not generated, GUARD check ...[2024-07-11 10:53:59.078152] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.679 passed 00:07:44.679 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 10:53:59.078239] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.679 passed 00:07:44.679 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 10:53:59.078272] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.679 passed 00:07:44.679 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:44.679 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 10:53:59.078331] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:44.679 passed 00:07:44.679 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:44.679 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:44.679 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:44.679 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 10:53:59.078484] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:44.679 passed 00:07:44.679 Test: verify copy: DIF generated, GUARD check ...passed 00:07:44.679 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:44.679 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:44.679 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 10:53:59.078637] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.679 passed 00:07:44.679 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 10:53:59.078673] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.679 passed 00:07:44.679 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-11 10:53:59.078705] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.679 passed 00:07:44.679 Test: generate copy: DIF generated, GUARD check ...passed 00:07:44.679 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:44.679 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:44.679 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:44.679 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:44.679 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:44.679 Test: generate copy: iovecs-len validate ...[2024-07-11 10:53:59.078962] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
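A note on the dif.c *ERROR* lines above: in T10 DIF, each protected block carries a guard tag (a CRC over the block data), an application tag, and a reference tag (conventionally tied to the LBA). The Expected/Actual mismatches are provoked deliberately by the negative tests ("DIF not generated, GUARD check" and similar), which is why each error is immediately followed by "passed". To re-run just this suite, a sketch that assumes the JSON config the harness pipes on fd 62 may be an empty object (the trace shows the config array empty):

# binary path taken from the trace above; '{}' as the config is an assumption
"$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62<<<'{}'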
00:07:44.679 passed 00:07:44.679 Test: generate copy: buffer alignment validate ...passed 00:07:44.679 00:07:44.679 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.679 suites 1 1 n/a 0 0 00:07:44.679 tests 26 26 26 0 0 00:07:44.679 asserts 115 115 115 0 n/a 00:07:44.679 00:07:44.679 Elapsed time = 0.003 seconds 00:07:44.939 00:07:44.939 real 0m0.492s 00:07:44.939 user 0m0.766s 00:07:44.939 sys 0m0.177s 00:07:44.939 10:53:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.939 10:53:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:44.939 ************************************ 00:07:44.939 END TEST accel_dif_functional_tests 00:07:44.939 ************************************ 00:07:44.939 10:53:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.939 00:07:44.939 real 0m31.123s 00:07:44.939 user 0m34.669s 00:07:44.939 sys 0m4.413s 00:07:44.939 10:53:59 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.939 10:53:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.939 ************************************ 00:07:44.939 END TEST accel 00:07:44.939 ************************************ 00:07:44.939 10:53:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.939 10:53:59 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:44.939 10:53:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.939 10:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.939 10:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:44.939 ************************************ 00:07:44.939 START TEST accel_rpc 00:07:44.939 ************************************ 00:07:44.939 10:53:59 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:45.198 * Looking for test storage... 00:07:45.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:45.198 10:53:59 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:45.198 10:53:59 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=130917 00:07:45.198 10:53:59 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:45.198 10:53:59 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 130917 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 130917 ']' 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.198 10:53:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.198 [2024-07-11 10:53:59.459668] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
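The accel_rpc suite drives a spdk_tgt started with --wait-for-rpc entirely over JSON-RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py (that equivalence is an assumption). The three calls below are lifted verbatim from the trace that follows; only the stand-alone framing is new, with SPDK_DIR as defined in the first sketch above:

RPC="$SPDK_DIR/scripts/rpc.py"
"$RPC" accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
"$RPC" framework_start_init                    # leave the --wait-for-rpc pre-init state
"$RPC" accel_get_opc_assignments | jq -r .copy # the test expects this to print: software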
00:07:45.198 [2024-07-11 10:53:59.459774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130917 ] 00:07:45.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.198 [2024-07-11 10:53:59.517131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.198 [2024-07-11 10:53:59.601133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.456 10:53:59 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.456 10:53:59 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:45.456 10:53:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:45.456 10:53:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:45.456 10:53:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:45.456 10:53:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:45.456 10:53:59 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:45.456 10:53:59 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.456 10:53:59 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.456 10:53:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.456 ************************************ 00:07:45.456 START TEST accel_assign_opcode 00:07:45.456 ************************************ 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.456 [2024-07-11 10:53:59.689814] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.456 [2024-07-11 10:53:59.697833] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.456 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:45.457 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.457 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.714 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.714 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:45.714 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.714 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:45.715 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:45.715 10:53:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:45.715 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.715 software 00:07:45.715 00:07:45.715 real 0m0.278s 00:07:45.715 user 0m0.040s 00:07:45.715 sys 0m0.006s 00:07:45.715 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.715 10:53:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.715 ************************************ 00:07:45.715 END TEST accel_assign_opcode 00:07:45.715 ************************************ 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:45.715 10:53:59 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 130917 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 130917 ']' 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 130917 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.715 10:53:59 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130917 00:07:45.715 10:54:00 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.715 10:54:00 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.715 10:54:00 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130917' 00:07:45.715 killing process with pid 130917 00:07:45.715 10:54:00 accel_rpc -- common/autotest_common.sh@967 -- # kill 130917 00:07:45.715 10:54:00 accel_rpc -- common/autotest_common.sh@972 -- # wait 130917 00:07:45.974 00:07:45.974 real 0m1.018s 00:07:45.974 user 0m0.958s 00:07:45.974 sys 0m0.408s 00:07:45.974 10:54:00 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.974 10:54:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.974 ************************************ 00:07:45.974 END TEST accel_rpc 00:07:45.974 ************************************ 00:07:45.974 10:54:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:45.974 10:54:00 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:45.974 10:54:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.974 10:54:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.974 10:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.233 ************************************ 00:07:46.233 START TEST app_cmdline 00:07:46.233 ************************************ 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.233 * Looking for test storage... 
00:07:46.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.233 10:54:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:46.233 10:54:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=131167 00:07:46.233 10:54:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:46.233 10:54:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 131167 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 131167 ']' 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.233 10:54:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.234 [2024-07-11 10:54:00.524839] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:07:46.234 [2024-07-11 10:54:00.524939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131167 ] 00:07:46.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.234 [2024-07-11 10:54:00.583912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.492 [2024-07-11 10:54:00.669508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.492 10:54:00 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.492 10:54:00 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:46.492 10:54:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:46.751 { 00:07:46.751 "version": "SPDK v24.09-pre git sha1 e64f085ad", 00:07:46.751 "fields": { 00:07:46.751 "major": 24, 00:07:46.751 "minor": 9, 00:07:46.751 "patch": 0, 00:07:46.751 "suffix": "-pre", 00:07:46.751 "commit": "e64f085ad" 00:07:46.751 } 00:07:46.751 } 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.751 10:54:01 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.751 10:54:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.751 10:54:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.751 10:54:01 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.010 10:54:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:47.010 10:54:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:47.010 10:54:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.010 request: 00:07:47.010 { 00:07:47.010 "method": "env_dpdk_get_mem_stats", 00:07:47.010 "req_id": 1 00:07:47.010 } 00:07:47.010 Got JSON-RPC error response 00:07:47.010 response: 00:07:47.010 { 00:07:47.010 "code": -32601, 00:07:47.010 "message": "Method not found" 00:07:47.010 } 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.010 10:54:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 131167 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 131167 ']' 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 131167 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.010 10:54:01 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131167 00:07:47.269 10:54:01 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.269 10:54:01 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.269 10:54:01 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131167' 00:07:47.269 killing process with pid 131167 00:07:47.269 10:54:01 app_cmdline -- common/autotest_common.sh@967 -- # kill 131167 00:07:47.269 10:54:01 app_cmdline -- common/autotest_common.sh@972 -- # wait 131167 00:07:47.528 00:07:47.528 real 0m1.408s 00:07:47.528 user 0m1.734s 00:07:47.528 sys 0m0.436s 00:07:47.528 10:54:01 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.528 
10:54:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.528 ************************************ 00:07:47.528 END TEST app_cmdline 00:07:47.528 ************************************ 00:07:47.528 10:54:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.528 10:54:01 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.528 10:54:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.528 10:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.528 10:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.528 ************************************ 00:07:47.528 START TEST version 00:07:47.528 ************************************ 00:07:47.528 10:54:01 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.528 * Looking for test storage... 00:07:47.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:47.528 10:54:01 version -- app/version.sh@17 -- # get_header_version major 00:07:47.528 10:54:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.528 10:54:01 version -- app/version.sh@17 -- # major=24 00:07:47.528 10:54:01 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.528 10:54:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.528 10:54:01 version -- app/version.sh@18 -- # minor=9 00:07:47.528 10:54:01 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.528 10:54:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.528 10:54:01 version -- app/version.sh@19 -- # patch=0 00:07:47.528 10:54:01 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.528 10:54:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # cut -f2 00:07:47.528 10:54:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.528 10:54:01 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.528 10:54:01 version -- app/version.sh@22 -- # version=24.9 00:07:47.528 10:54:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.528 10:54:01 version -- app/version.sh@28 -- # version=24.9rc0 00:07:47.528 10:54:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:47.528 10:54:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
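Reconstructed from the trace, version.sh's check reduces to this: extract each version component from the C header, assemble the string, and compare it with what the installed python package reports. Both commands below appear verbatim in the trace above; only running them by hand is new.

# header side ('major' shown; minor/patch/suffix are extracted the same way)
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
    "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
# python side — assumes PYTHONPATH includes the repo's python/ dir, as the
# trace sets it; the test asserts both sides agree (24.9rc0 in this run)
python3 -c 'import spdk; print(spdk.__version__)'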
00:07:47.788 10:54:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:47.788 10:54:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:47.788 00:07:47.788 real 0m0.106s 00:07:47.788 user 0m0.059s 00:07:47.788 sys 0m0.068s 00:07:47.788 10:54:01 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.788 10:54:01 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.788 ************************************ 00:07:47.788 END TEST version 00:07:47.788 ************************************ 00:07:47.788 10:54:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.788 10:54:02 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@198 -- # uname -s 00:07:47.788 10:54:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:47.788 10:54:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:47.788 10:54:02 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:47.788 10:54:02 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:47.788 10:54:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.788 10:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:47.788 10:54:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:47.788 10:54:02 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:47.788 10:54:02 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.788 10:54:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.788 10:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.788 10:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:47.788 ************************************ 00:07:47.788 START TEST nvmf_tcp 00:07:47.789 ************************************ 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.789 * Looking for test storage... 00:07:47.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.789 10:54:02 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.789 10:54:02 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.789 10:54:02 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.789 10:54:02 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:47.789 10:54:02 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:47.789 10:54:02 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.789 10:54:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.789 ************************************ 00:07:47.789 START TEST nvmf_example 00:07:47.789 ************************************ 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:47.789 * Looking for test storage... 
00:07:47.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.789 10:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.050 10:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.963 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.963 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.963 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.963 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.963 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:07:49.964 00:07:49.964 --- 10.0.0.2 ping statistics --- 00:07:49.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.964 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:07:49.964 00:07:49.964 --- 10.0.0.1 ping statistics --- 00:07:49.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.964 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=133135 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 133135 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 133135 ']' 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.964 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:49.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:50.223 10:54:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:50.484 EAL: No free 2048 kB hugepages reported on node 1 
00:08:00.477 Initializing NVMe Controllers 00:08:00.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:00.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:00.477 Initialization complete. Launching workers. 00:08:00.477 ======================================================== 00:08:00.477 Latency(us) 00:08:00.477 Device Information : IOPS MiB/s Average min max 00:08:00.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14893.40 58.18 4297.62 882.81 19075.07 00:08:00.477 ======================================================== 00:08:00.477 Total : 14893.40 58.18 4297.62 882.81 19075.07 00:08:00.477 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.477 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.477 rmmod nvme_tcp 00:08:00.477 rmmod nvme_fabrics 00:08:00.477 rmmod nvme_keyring 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 133135 ']' 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 133135 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 133135 ']' 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 133135 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133135 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133135' 00:08:00.736 killing process with pid 133135 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 133135 00:08:00.736 10:54:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 133135 00:08:00.736 nvmf threads initialize successfully 00:08:00.736 bdev subsystem init successfully 00:08:00.736 created a nvmf target service 00:08:00.736 create targets's poll groups done 00:08:00.736 all subsystems of target started 00:08:00.736 nvmf target is running 00:08:00.736 all subsystems of target stopped 00:08:00.736 destroy targets's poll groups done 00:08:00.736 destroyed the nvmf target service 00:08:00.736 bdev subsystem finish successfully 00:08:00.736 nvmf threads destroy successfully 00:08:00.736 10:54:15 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.736 10:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.284 00:08:03.284 real 0m15.070s 00:08:03.284 user 0m41.939s 00:08:03.284 sys 0m3.212s 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.284 10:54:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.284 ************************************ 00:08:03.284 END TEST nvmf_example 00:08:03.284 ************************************ 00:08:03.284 10:54:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.284 10:54:17 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.284 10:54:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.284 10:54:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.284 10:54:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.284 ************************************ 00:08:03.284 START TEST nvmf_filesystem 00:08:03.284 ************************************ 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.284 * Looking for test storage... 
00:08:03.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.284 10:54:17 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.284 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:03.285 #define SPDK_CONFIG_H 00:08:03.285 #define SPDK_CONFIG_APPS 1 00:08:03.285 #define SPDK_CONFIG_ARCH native 00:08:03.285 #undef SPDK_CONFIG_ASAN 00:08:03.285 #undef SPDK_CONFIG_AVAHI 00:08:03.285 #undef SPDK_CONFIG_CET 00:08:03.285 #define SPDK_CONFIG_COVERAGE 1 00:08:03.285 #define SPDK_CONFIG_CROSS_PREFIX 00:08:03.285 #undef SPDK_CONFIG_CRYPTO 00:08:03.285 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:03.285 #undef SPDK_CONFIG_CUSTOMOCF 00:08:03.285 #undef SPDK_CONFIG_DAOS 00:08:03.285 #define SPDK_CONFIG_DAOS_DIR 00:08:03.285 #define SPDK_CONFIG_DEBUG 1 00:08:03.285 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:03.285 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.285 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:03.285 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.285 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:03.285 #undef SPDK_CONFIG_DPDK_UADK 00:08:03.285 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.285 #define SPDK_CONFIG_EXAMPLES 1 00:08:03.285 #undef SPDK_CONFIG_FC 00:08:03.285 #define SPDK_CONFIG_FC_PATH 00:08:03.285 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:03.285 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:03.285 #undef SPDK_CONFIG_FUSE 00:08:03.285 #undef SPDK_CONFIG_FUZZER 00:08:03.285 #define SPDK_CONFIG_FUZZER_LIB 00:08:03.285 #undef SPDK_CONFIG_GOLANG 00:08:03.285 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:03.285 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:03.285 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:03.285 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:03.285 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:03.285 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:03.285 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:03.285 #define SPDK_CONFIG_IDXD 1 00:08:03.285 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:03.285 #undef SPDK_CONFIG_IPSEC_MB 00:08:03.285 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:03.285 #define SPDK_CONFIG_ISAL 1 00:08:03.285 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:03.285 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:03.285 #define 
SPDK_CONFIG_LIBDIR 00:08:03.285 #undef SPDK_CONFIG_LTO 00:08:03.285 #define SPDK_CONFIG_MAX_LCORES 128 00:08:03.285 #define SPDK_CONFIG_NVME_CUSE 1 00:08:03.285 #undef SPDK_CONFIG_OCF 00:08:03.285 #define SPDK_CONFIG_OCF_PATH 00:08:03.285 #define SPDK_CONFIG_OPENSSL_PATH 00:08:03.285 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:03.285 #define SPDK_CONFIG_PGO_DIR 00:08:03.285 #undef SPDK_CONFIG_PGO_USE 00:08:03.285 #define SPDK_CONFIG_PREFIX /usr/local 00:08:03.285 #undef SPDK_CONFIG_RAID5F 00:08:03.285 #undef SPDK_CONFIG_RBD 00:08:03.285 #define SPDK_CONFIG_RDMA 1 00:08:03.285 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:03.285 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:03.285 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:03.285 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:03.285 #define SPDK_CONFIG_SHARED 1 00:08:03.285 #undef SPDK_CONFIG_SMA 00:08:03.285 #define SPDK_CONFIG_TESTS 1 00:08:03.285 #undef SPDK_CONFIG_TSAN 00:08:03.285 #define SPDK_CONFIG_UBLK 1 00:08:03.285 #define SPDK_CONFIG_UBSAN 1 00:08:03.285 #undef SPDK_CONFIG_UNIT_TESTS 00:08:03.285 #undef SPDK_CONFIG_URING 00:08:03.285 #define SPDK_CONFIG_URING_PATH 00:08:03.285 #undef SPDK_CONFIG_URING_ZNS 00:08:03.285 #undef SPDK_CONFIG_USDT 00:08:03.285 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:03.285 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:03.285 #define SPDK_CONFIG_VFIO_USER 1 00:08:03.285 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:03.285 #define SPDK_CONFIG_VHOST 1 00:08:03.285 #define SPDK_CONFIG_VIRTIO 1 00:08:03.285 #undef SPDK_CONFIG_VTUNE 00:08:03.285 #define SPDK_CONFIG_VTUNE_DIR 00:08:03.285 #define SPDK_CONFIG_WERROR 1 00:08:03.285 #define SPDK_CONFIG_WPDK_DIR 00:08:03.285 #undef SPDK_CONFIG_XNVME 00:08:03.285 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.285 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:03.286 
10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:03.286 
10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:03.286 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 135334 ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 135334 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.xHiNn7 00:08:03.287 
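The scratch-directory name /tmp/spdk.xHiNn7 above comes from a dry-run mktemp: -u only prints a candidate name without creating anything, and the script materializes it a couple of records later with mkdir -p. Reduced to those two steps (the tests/target leaf matches the mkdir in the next record):

    # Generate a name without creating it (-u), then create the scratch tree.
    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # e.g. /tmp/spdk.xHiNn7
    mkdir -p "$storage_fallback/tests/target"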
10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xHiNn7/tests/target /tmp/spdk.xHiNn7 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:03.287 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=54585049088 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994725376 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7409676288 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993985536 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=3375104 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390191104 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8757248 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30997053440 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=311296 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:03.288 * Looking for test storage... 
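At this point set_test_storage has parsed the `df -T` table into the mounts/fss/sizes/avails associative arrays; the loop that follows picks the first candidate directory whose filesystem can hold requested_size=2214592512 bytes (2 GiB plus a 64 MiB margin). A compressed standalone sketch of the same probe, assuming GNU df with byte-granular output (-B1); the real helper parses plain `df -T` and also handles the tmpfs/ramfs special cases that are skipped outright here:

    #!/usr/bin/env bash
    # Simplified re-implementation of the storage probe traced above.
    requested_size=2214592512   # from the trace: 2 GiB + 64 MiB overhead
    while read -r src fstype size used avail target; do
        case "$fstype" in devtmpfs|tmpfs) continue ;; esac  # cannot back test files
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s (%s bytes free)\n' "$target" "$avail"
            break
        fi
    done < <(df -B1 --output=source,fstype,size,used,avail,target | tail -n +2)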
00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54585049088 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9624268800 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.288 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
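Note how /etc/opt/spdk-pkgdep/paths/export.sh has now been sourced yet again (once per nested `source` of the common helpers), so PATH carries six stacked copies of the /opt/go, /opt/golangci and /opt/protoc prefixes visible in the echo above. The repetition is harmless but noisy; a guard of this shape would keep the prepend idempotent (a hypothetical fix, not what export.sh actually does):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH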
00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.289 10:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
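gather_supported_nvmf_pci_devs seeds the e810/x722/mlx arrays from a pci_bus_cache map keyed by "vendor:device" (intel=0x8086, mellanox=0x15b3); the records that follow scan those buckets and report both test ports as Intel E810 ice devices (0x8086:0x159b). A sketch of how such a cache can be rebuilt from lspci, assuming the "0xVVVV:0xDDDD" key format visible in the trace (the real cache is built elsewhere in nvmf/common.sh):

    # Rebuild a "0xVVVV:0xDDDD" -> "pci addresses" map of the shape consumed above.
    declare -A pci_bus_cache
    while read -r addr _class vendev _; do
        ven="0x${vendev%%:*}" dev="0x${vendev##*:}"
        pci_bus_cache["$ven:$dev"]+=" $addr"   # space-separated address list
    done < <(lspci -Dn)
    # E810 parts, matching the two "Found 0000:0a:00.x (0x8086 - 0x159b)" records:
    e810=(${pci_bus_cache["0x8086:0x1592"]-} ${pci_bus_cache["0x8086:0x159b"]-})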
00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:05.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:05.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:05.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:05.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.190 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:05.448 00:08:05.448 --- 10.0.0.2 ping statistics --- 00:08:05.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.448 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:05.448 00:08:05.448 --- 10.0.0.1 ping statistics --- 00:08:05.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.448 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 ************************************ 00:08:05.448 START TEST nvmf_filesystem_no_in_capsule 00:08:05.448 ************************************ 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=136967 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 136967 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
136967 ']' 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.448 10:54:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.448 [2024-07-11 10:54:19.788241] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:08:05.448 [2024-07-11 10:54:19.788316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.448 [2024-07-11 10:54:19.851872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.708 [2024-07-11 10:54:19.941827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.708 [2024-07-11 10:54:19.941875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.708 [2024-07-11 10:54:19.941905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.708 [2024-07-11 10:54:19.941917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.708 [2024-07-11 10:54:19.941927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
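nvmf_tcp_init above builds the usual two-port loopback topology for phy runs: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, connectivity is pinged in both directions, and nvmf_tgt is launched inside the namespace (pid 136967). Condensed to its essentials, the setup performed by nvmf/common.sh is (interface names, addresses and paths copied from the trace; the address flushes before the move are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
    modprobe nvme-tcp
    # Backgrounded; waitforlisten then polls the /var/tmp/spdk.sock RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &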
00:08:05.708 [2024-07-11 10:54:19.942067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.708 [2024-07-11 10:54:19.942133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.708 [2024-07-11 10:54:19.942209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.708 [2024-07-11 10:54:19.942211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.708 [2024-07-11 10:54:20.101442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.708 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.965 Malloc1 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.965 [2024-07-11 10:54:20.292229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:05.965 { 00:08:05.965 "name": "Malloc1", 00:08:05.965 "aliases": [ 00:08:05.965 "734b13d5-7b50-4b7b-bb79-c560353c8a07" 00:08:05.965 ], 00:08:05.965 "product_name": "Malloc disk", 00:08:05.965 "block_size": 512, 00:08:05.965 "num_blocks": 1048576, 00:08:05.965 "uuid": "734b13d5-7b50-4b7b-bb79-c560353c8a07", 00:08:05.965 "assigned_rate_limits": { 00:08:05.965 "rw_ios_per_sec": 0, 00:08:05.965 "rw_mbytes_per_sec": 0, 00:08:05.965 "r_mbytes_per_sec": 0, 00:08:05.965 "w_mbytes_per_sec": 0 00:08:05.965 }, 00:08:05.965 "claimed": true, 00:08:05.965 "claim_type": "exclusive_write", 00:08:05.965 "zoned": false, 00:08:05.965 "supported_io_types": { 00:08:05.965 "read": true, 00:08:05.965 "write": true, 00:08:05.965 "unmap": true, 00:08:05.965 "flush": true, 00:08:05.965 "reset": true, 00:08:05.965 "nvme_admin": false, 00:08:05.965 "nvme_io": false, 00:08:05.965 "nvme_io_md": false, 00:08:05.965 "write_zeroes": true, 00:08:05.965 "zcopy": true, 00:08:05.965 "get_zone_info": false, 00:08:05.965 "zone_management": false, 00:08:05.965 "zone_append": false, 00:08:05.965 "compare": false, 00:08:05.965 "compare_and_write": false, 00:08:05.965 "abort": true, 00:08:05.965 "seek_hole": false, 00:08:05.965 "seek_data": false, 00:08:05.965 "copy": true, 00:08:05.965 "nvme_iov_md": false 00:08:05.965 }, 00:08:05.965 "memory_domains": [ 00:08:05.965 { 
00:08:05.965 "dma_device_id": "system", 00:08:05.965 "dma_device_type": 1 00:08:05.965 }, 00:08:05.965 { 00:08:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.965 "dma_device_type": 2 00:08:05.965 } 00:08:05.965 ], 00:08:05.965 "driver_specific": {} 00:08:05.965 } 00:08:05.965 ]' 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:05.965 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:06.225 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:06.225 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:06.225 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.225 10:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.834 10:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.834 10:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:06.834 10:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.834 10:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:06.834 10:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.745 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:09.317 10:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:09.890 10:54:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.836 ************************************ 00:08:10.836 START TEST filesystem_ext4 00:08:10.836 ************************************ 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:10.836 10:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:10.836 10:54:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.836 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.836 Discarding device blocks: 0/522240 done 00:08:11.098 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:11.098 Filesystem UUID: 96259307-9b4b-40dd-97b0-96a1228bb95c 00:08:11.098 Superblock backups stored on blocks: 00:08:11.098 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:11.098 00:08:11.098 Allocating group tables: 0/64 done 00:08:11.098 Writing inode tables: 0/64 done 00:08:11.098 Creating journal (8192 blocks): done 00:08:11.929 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:08:11.929 00:08:11.929 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:11.929 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 136967 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.189 00:08:12.189 real 0m1.448s 00:08:12.189 user 0m0.017s 00:08:12.189 sys 0m0.059s 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 ************************************ 00:08:12.189 END TEST filesystem_ext4 00:08:12.189 ************************************ 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.189 10:54:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.189 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.448 ************************************ 00:08:12.448 START TEST filesystem_btrfs 00:08:12.448 ************************************ 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.448 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:12.708 btrfs-progs v6.6.2 00:08:12.708 See https://btrfs.readthedocs.io for more information. 00:08:12.708 00:08:12.708 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:12.708 NOTE: several default settings have changed in version 5.15, please make sure 00:08:12.708 this does not affect your deployments: 00:08:12.708 - DUP for metadata (-m dup) 00:08:12.708 - enabled no-holes (-O no-holes) 00:08:12.708 - enabled free-space-tree (-R free-space-tree) 00:08:12.708 00:08:12.708 Label: (null) 00:08:12.708 UUID: 0385746e-ab6d-44a7-b4af-da47bea0b18b 00:08:12.708 Node size: 16384 00:08:12.708 Sector size: 4096 00:08:12.708 Filesystem size: 510.00MiB 00:08:12.708 Block group profiles: 00:08:12.708 Data: single 8.00MiB 00:08:12.708 Metadata: DUP 32.00MiB 00:08:12.708 System: DUP 8.00MiB 00:08:12.708 SSD detected: yes 00:08:12.708 Zoned device: no 00:08:12.708 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:12.708 Runtime features: free-space-tree 00:08:12.708 Checksum: crc32c 00:08:12.708 Number of devices: 1 00:08:12.708 Devices: 00:08:12.708 ID SIZE PATH 00:08:12.708 1 510.00MiB /dev/nvme0n1p1 00:08:12.708 00:08:12.708 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:12.708 10:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 136967 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.967 00:08:12.967 real 0m0.717s 00:08:12.967 user 0m0.016s 00:08:12.967 sys 0m0.153s 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 ************************************ 00:08:12.967 END TEST filesystem_btrfs 00:08:12.967 ************************************ 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.967 ************************************ 00:08:12.967 START TEST filesystem_xfs 00:08:12.967 ************************************ 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.967 10:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:13.226 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:13.226 = sectsz=512 attr=2, projid32bit=1 00:08:13.226 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:13.226 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:13.226 data = bsize=4096 blocks=130560, imaxpct=25 00:08:13.226 = sunit=0 swidth=0 blks 00:08:13.226 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:13.226 log =internal log bsize=4096 blocks=16384, version=2 00:08:13.226 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:13.226 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:14.165 Discarding blocks...Done. 
00:08:14.165 10:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:14.165 10:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 136967 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.466 00:08:17.466 real 0m3.893s 00:08:17.466 user 0m0.013s 00:08:17.466 sys 0m0.099s 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.466 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 ************************************ 00:08:17.466 END TEST filesystem_xfs 00:08:17.466 ************************************ 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.467 10:54:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 136967 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 136967 ']' 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 136967 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136967 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136967' 00:08:17.467 killing process with pid 136967 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 136967 00:08:17.467 10:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 136967 00:08:17.727 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:17.728 00:08:17.728 real 0m12.388s 00:08:17.728 user 0m47.556s 00:08:17.728 sys 0m1.933s 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.728 ************************************ 00:08:17.728 END TEST nvmf_filesystem_no_in_capsule 00:08:17.728 ************************************ 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.728 10:54:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.990 ************************************ 00:08:17.990 START TEST nvmf_filesystem_in_capsule 00:08:17.990 ************************************ 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=138655 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 138655 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 138655 ']' 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.990 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.990 [2024-07-11 10:54:32.233009] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:08:17.990 [2024-07-11 10:54:32.233104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.990 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.990 [2024-07-11 10:54:32.301023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.990 [2024-07-11 10:54:32.390367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.990 [2024-07-11 10:54:32.390424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:17.990 [2024-07-11 10:54:32.390452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.990 [2024-07-11 10:54:32.390463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.990 [2024-07-11 10:54:32.390473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.990 [2024-07-11 10:54:32.390554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.990 [2024-07-11 10:54:32.390623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.990 [2024-07-11 10:54:32.390645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.990 [2024-07-11 10:54:32.390649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.250 [2024-07-11 10:54:32.544660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.250 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 Malloc1 00:08:18.510 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.510 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.510 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.510 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.510 10:54:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 [2024-07-11 10:54:32.715119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:18.511 { 00:08:18.511 "name": "Malloc1", 00:08:18.511 "aliases": [ 00:08:18.511 "7db7318f-cd86-4a02-903b-5d42e2336cb1" 00:08:18.511 ], 00:08:18.511 "product_name": "Malloc disk", 00:08:18.511 "block_size": 512, 00:08:18.511 "num_blocks": 1048576, 00:08:18.511 "uuid": "7db7318f-cd86-4a02-903b-5d42e2336cb1", 00:08:18.511 "assigned_rate_limits": { 00:08:18.511 "rw_ios_per_sec": 0, 00:08:18.511 "rw_mbytes_per_sec": 0, 00:08:18.511 "r_mbytes_per_sec": 0, 00:08:18.511 "w_mbytes_per_sec": 0 00:08:18.511 }, 00:08:18.511 "claimed": true, 00:08:18.511 "claim_type": "exclusive_write", 00:08:18.511 "zoned": false, 00:08:18.511 "supported_io_types": { 00:08:18.511 "read": true, 00:08:18.511 "write": true, 00:08:18.511 "unmap": true, 00:08:18.511 "flush": true, 00:08:18.511 "reset": true, 00:08:18.511 "nvme_admin": false, 00:08:18.511 "nvme_io": false, 00:08:18.511 "nvme_io_md": false, 00:08:18.511 "write_zeroes": true, 00:08:18.511 "zcopy": true, 00:08:18.511 "get_zone_info": false, 00:08:18.511 "zone_management": false, 00:08:18.511 
"zone_append": false, 00:08:18.511 "compare": false, 00:08:18.511 "compare_and_write": false, 00:08:18.511 "abort": true, 00:08:18.511 "seek_hole": false, 00:08:18.511 "seek_data": false, 00:08:18.511 "copy": true, 00:08:18.511 "nvme_iov_md": false 00:08:18.511 }, 00:08:18.511 "memory_domains": [ 00:08:18.511 { 00:08:18.511 "dma_device_id": "system", 00:08:18.511 "dma_device_type": 1 00:08:18.511 }, 00:08:18.511 { 00:08:18.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.511 "dma_device_type": 2 00:08:18.511 } 00:08:18.511 ], 00:08:18.511 "driver_specific": {} 00:08:18.511 } 00:08:18.511 ]' 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.511 10:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.083 10:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.083 10:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:19.083 10:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.083 10:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:19.083 10:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.621 10:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:21.882 10:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.820 ************************************ 00:08:22.820 START TEST filesystem_in_capsule_ext4 00:08:22.820 ************************************ 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:22.820 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:22.821 10:54:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:22.821 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:22.821 10:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:22.821 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.079 Discarding device blocks: 0/522240 done 00:08:23.079 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.079 Filesystem UUID: 86fc7081-afff-44b0-945e-6c1e5d2957eb 00:08:23.079 Superblock backups stored on blocks: 00:08:23.079 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.079 00:08:23.079 Allocating group tables: 0/64 done 00:08:23.079 Writing inode tables: 0/64 done 00:08:23.339 Creating journal (8192 blocks): done 00:08:24.171 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:24.171 00:08:24.171 10:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:24.171 10:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 138655 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.112 00:08:25.112 real 0m2.075s 00:08:25.112 user 0m0.018s 00:08:25.112 sys 0m0.057s 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:25.112 ************************************ 00:08:25.112 END TEST filesystem_in_capsule_ext4 00:08:25.112 ************************************ 
00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.112 ************************************ 00:08:25.112 START TEST filesystem_in_capsule_btrfs 00:08:25.112 ************************************ 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:25.112 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.372 btrfs-progs v6.6.2 00:08:25.372 See https://btrfs.readthedocs.io for more information. 00:08:25.372 00:08:25.372 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:25.372 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.372 this does not affect your deployments: 00:08:25.372 - DUP for metadata (-m dup) 00:08:25.372 - enabled no-holes (-O no-holes) 00:08:25.372 - enabled free-space-tree (-R free-space-tree) 00:08:25.372 00:08:25.372 Label: (null) 00:08:25.372 UUID: d14d9ce5-3f2e-4eb9-b39a-90d56b7fd4c2 00:08:25.372 Node size: 16384 00:08:25.372 Sector size: 4096 00:08:25.372 Filesystem size: 510.00MiB 00:08:25.372 Block group profiles: 00:08:25.372 Data: single 8.00MiB 00:08:25.372 Metadata: DUP 32.00MiB 00:08:25.372 System: DUP 8.00MiB 00:08:25.372 SSD detected: yes 00:08:25.372 Zoned device: no 00:08:25.372 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.372 Runtime features: free-space-tree 00:08:25.372 Checksum: crc32c 00:08:25.372 Number of devices: 1 00:08:25.372 Devices: 00:08:25.372 ID SIZE PATH 00:08:25.372 1 510.00MiB /dev/nvme0n1p1 00:08:25.372 00:08:25.372 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:25.372 10:54:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 138655 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.940 00:08:25.940 real 0m0.997s 00:08:25.940 user 0m0.022s 00:08:25.940 sys 0m0.112s 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.940 ************************************ 00:08:25.940 END TEST filesystem_in_capsule_btrfs 00:08:25.940 ************************************ 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.940 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.200 ************************************ 00:08:26.200 START TEST filesystem_in_capsule_xfs 00:08:26.200 ************************************ 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:26.200 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:26.201 10:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:26.201 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:26.201 = sectsz=512 attr=2, projid32bit=1 00:08:26.201 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:26.201 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:26.201 data = bsize=4096 blocks=130560, imaxpct=25 00:08:26.201 = sunit=0 swidth=0 blks 00:08:26.201 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:26.201 log =internal log bsize=4096 blocks=16384, version=2 00:08:26.201 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:26.201 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.140 Discarding blocks...Done. 
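Both mkfs runs above go through the make_filesystem helper traced from common/autotest_common.sh (@924-@935). A minimal sketch of that flow, reconstructed from the traced variable names; the ext4 branch is not exercised in this trace and the retry counter `i` is declared but its loop body is omitted here, so treat both as assumptions:

    # Sketch of make_filesystem as traced above (simplified; the real
    # helper also uses the `i` counter to retry a failed mkfs).
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # assumption: mkfs.ext4 spells its force flag -F
        else
            force=-f    # the traced btrfs and xfs runs both pass -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

Each run is then followed by the same @23-@43 verification sequence: mount the partition, touch and rm a file, sync, umount, and check via lsblk that nvme0n1p1 is still present.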
00:08:27.140 10:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:27.140 10:54:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 138655 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.683 00:08:29.683 real 0m3.601s 00:08:29.683 user 0m0.020s 00:08:29.683 sys 0m0.059s 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.683 10:54:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.683 ************************************ 00:08:29.683 END TEST filesystem_in_capsule_xfs 00:08:29.684 ************************************ 00:08:29.684 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:29.684 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:29.684 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:29.684 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:29.943 10:54:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 138655 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 138655 ']' 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 138655 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138655 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138655' 00:08:29.943 killing process with pid 138655 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 138655 00:08:29.943 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 138655 00:08:30.202 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:30.202 00:08:30.202 real 0m12.442s 00:08:30.202 user 0m47.789s 00:08:30.202 sys 0m1.880s 00:08:30.202 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.202 10:54:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.202 ************************************ 00:08:30.202 END TEST nvmf_filesystem_in_capsule 00:08:30.202 ************************************ 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.464 rmmod nvme_tcp 00:08:30.464 rmmod nvme_fabrics 00:08:30.464 rmmod nvme_keyring 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.464 10:54:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.380 10:54:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.380 00:08:32.380 real 0m29.467s 00:08:32.380 user 1m36.280s 00:08:32.380 sys 0m5.519s 00:08:32.380 10:54:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.380 10:54:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.380 ************************************ 00:08:32.380 END TEST nvmf_filesystem 00:08:32.380 ************************************ 00:08:32.380 10:54:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:32.380 10:54:46 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:32.380 10:54:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:32.380 10:54:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.380 10:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.380 ************************************ 00:08:32.380 START TEST nvmf_target_discovery 00:08:32.380 ************************************ 00:08:32.380 10:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:32.642 * Looking for test storage... 
00:08:32.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.642 10:54:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.643 10:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.557 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.558 10:54:48 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:34.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:34.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:34.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:34.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.558 10:54:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:08:34.819 00:08:34.819 --- 10.0.0.2 ping statistics --- 00:08:34.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.819 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:08:34.819 00:08:34.819 --- 10.0.0.1 ping statistics --- 00:08:34.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.819 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=142192 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 142192 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 142192 ']' 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:34.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.819 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.819 [2024-07-11 10:54:49.167399] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:08:34.819 [2024-07-11 10:54:49.167488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.819 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.819 [2024-07-11 10:54:49.230302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.079 [2024-07-11 10:54:49.313231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.079 [2024-07-11 10:54:49.313285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.079 [2024-07-11 10:54:49.313313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.080 [2024-07-11 10:54:49.313323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.080 [2024-07-11 10:54:49.313333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.080 [2024-07-11 10:54:49.313483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.080 [2024-07-11 10:54:49.313588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.080 [2024-07-11 10:54:49.313679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.080 [2024-07-11 10:54:49.313682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 [2024-07-11 10:54:49.466638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
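The bdev_null_create call above is the first pass of the provisioning loop in target/discovery.sh (@26-@35), which the trace below replays four times. Condensed into the loop it comes from, where rpc_cmd is the suite's wrapper around scripts/rpc.py aimed at the target running inside the cvl_0_0_ns_spdk namespace:

    # Loop reconstructed from the discovery.sh trace: one null bdev, one
    # subsystem, one namespace and one TCP listener per target.
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # Expose the discovery service itself and add a referral on port 4430.
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output further down confirms the result: six discovery log records, namely the current discovery subsystem, cnode1 through cnode4 on port 4420, and the referral on port 4430.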
00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 Null1 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.080 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 [2024-07-11 10:54:49.514979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 Null2 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:35.342 10:54:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 Null3 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 Null4 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.342 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:35.603 00:08:35.603 Discovery Log Number of Records 6, Generation counter 6 00:08:35.603 =====Discovery Log Entry 0====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: current discovery subsystem 00:08:35.603 treq: not required 00:08:35.603 portid: 0 00:08:35.603 trsvcid: 4420 00:08:35.603 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.603 traddr: 10.0.0.2 00:08:35.603 eflags: explicit discovery connections, duplicate discovery information 00:08:35.603 sectype: none 00:08:35.603 =====Discovery Log Entry 1====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: nvme subsystem 00:08:35.603 treq: not required 00:08:35.603 portid: 0 00:08:35.603 trsvcid: 4420 00:08:35.603 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:35.603 traddr: 10.0.0.2 00:08:35.603 eflags: none 00:08:35.603 sectype: none 00:08:35.603 =====Discovery Log Entry 2====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: nvme subsystem 00:08:35.603 treq: not required 00:08:35.603 portid: 0 00:08:35.603 trsvcid: 4420 00:08:35.603 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:35.603 traddr: 10.0.0.2 00:08:35.603 eflags: none 00:08:35.603 sectype: none 00:08:35.603 =====Discovery Log Entry 3====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: nvme subsystem 00:08:35.603 treq: not required 00:08:35.603 portid: 0 00:08:35.603 trsvcid: 4420 00:08:35.603 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:35.603 traddr: 10.0.0.2 00:08:35.603 eflags: none 00:08:35.603 sectype: none 00:08:35.603 =====Discovery Log Entry 4====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: nvme subsystem 00:08:35.603 treq: not required 
00:08:35.603 portid: 0 00:08:35.603 trsvcid: 4420 00:08:35.603 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:35.603 traddr: 10.0.0.2 00:08:35.603 eflags: none 00:08:35.603 sectype: none 00:08:35.603 =====Discovery Log Entry 5====== 00:08:35.603 trtype: tcp 00:08:35.603 adrfam: ipv4 00:08:35.603 subtype: discovery subsystem referral 00:08:35.603 treq: not required 00:08:35.604 portid: 0 00:08:35.604 trsvcid: 4430 00:08:35.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:35.604 traddr: 10.0.0.2 00:08:35.604 eflags: none 00:08:35.604 sectype: none 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:35.604 Perform nvmf subsystem discovery via RPC 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 [ 00:08:35.604 { 00:08:35.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:35.604 "subtype": "Discovery", 00:08:35.604 "listen_addresses": [ 00:08:35.604 { 00:08:35.604 "trtype": "TCP", 00:08:35.604 "adrfam": "IPv4", 00:08:35.604 "traddr": "10.0.0.2", 00:08:35.604 "trsvcid": "4420" 00:08:35.604 } 00:08:35.604 ], 00:08:35.604 "allow_any_host": true, 00:08:35.604 "hosts": [] 00:08:35.604 }, 00:08:35.604 { 00:08:35.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.604 "subtype": "NVMe", 00:08:35.604 "listen_addresses": [ 00:08:35.604 { 00:08:35.604 "trtype": "TCP", 00:08:35.604 "adrfam": "IPv4", 00:08:35.604 "traddr": "10.0.0.2", 00:08:35.604 "trsvcid": "4420" 00:08:35.604 } 00:08:35.604 ], 00:08:35.604 "allow_any_host": true, 00:08:35.604 "hosts": [], 00:08:35.604 "serial_number": "SPDK00000000000001", 00:08:35.604 "model_number": "SPDK bdev Controller", 00:08:35.604 "max_namespaces": 32, 00:08:35.604 "min_cntlid": 1, 00:08:35.604 "max_cntlid": 65519, 00:08:35.604 "namespaces": [ 00:08:35.604 { 00:08:35.604 "nsid": 1, 00:08:35.604 "bdev_name": "Null1", 00:08:35.604 "name": "Null1", 00:08:35.604 "nguid": "4EDB3955FA2341C3856A2059AD48CACB", 00:08:35.604 "uuid": "4edb3955-fa23-41c3-856a-2059ad48cacb" 00:08:35.604 } 00:08:35.604 ] 00:08:35.604 }, 00:08:35.604 { 00:08:35.604 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:35.604 "subtype": "NVMe", 00:08:35.604 "listen_addresses": [ 00:08:35.604 { 00:08:35.604 "trtype": "TCP", 00:08:35.604 "adrfam": "IPv4", 00:08:35.604 "traddr": "10.0.0.2", 00:08:35.604 "trsvcid": "4420" 00:08:35.604 } 00:08:35.604 ], 00:08:35.604 "allow_any_host": true, 00:08:35.604 "hosts": [], 00:08:35.604 "serial_number": "SPDK00000000000002", 00:08:35.604 "model_number": "SPDK bdev Controller", 00:08:35.604 "max_namespaces": 32, 00:08:35.604 "min_cntlid": 1, 00:08:35.604 "max_cntlid": 65519, 00:08:35.604 "namespaces": [ 00:08:35.604 { 00:08:35.604 "nsid": 1, 00:08:35.604 "bdev_name": "Null2", 00:08:35.604 "name": "Null2", 00:08:35.604 "nguid": "B2838633B51640C3B5A7D80E5411D059", 00:08:35.604 "uuid": "b2838633-b516-40c3-b5a7-d80e5411d059" 00:08:35.604 } 00:08:35.604 ] 00:08:35.604 }, 00:08:35.604 { 00:08:35.604 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:35.604 "subtype": "NVMe", 00:08:35.604 "listen_addresses": [ 00:08:35.604 { 00:08:35.604 "trtype": "TCP", 00:08:35.604 "adrfam": "IPv4", 00:08:35.604 "traddr": "10.0.0.2", 00:08:35.604 "trsvcid": "4420" 00:08:35.604 } 00:08:35.604 ], 00:08:35.604 "allow_any_host": true, 
00:08:35.604 "hosts": [], 00:08:35.604 "serial_number": "SPDK00000000000003", 00:08:35.604 "model_number": "SPDK bdev Controller", 00:08:35.604 "max_namespaces": 32, 00:08:35.604 "min_cntlid": 1, 00:08:35.604 "max_cntlid": 65519, 00:08:35.604 "namespaces": [ 00:08:35.604 { 00:08:35.604 "nsid": 1, 00:08:35.604 "bdev_name": "Null3", 00:08:35.604 "name": "Null3", 00:08:35.604 "nguid": "760EB5A6066D411789CB54E3F515489D", 00:08:35.604 "uuid": "760eb5a6-066d-4117-89cb-54e3f515489d" 00:08:35.604 } 00:08:35.604 ] 00:08:35.604 }, 00:08:35.604 { 00:08:35.604 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:35.604 "subtype": "NVMe", 00:08:35.604 "listen_addresses": [ 00:08:35.604 { 00:08:35.604 "trtype": "TCP", 00:08:35.604 "adrfam": "IPv4", 00:08:35.604 "traddr": "10.0.0.2", 00:08:35.604 "trsvcid": "4420" 00:08:35.604 } 00:08:35.604 ], 00:08:35.604 "allow_any_host": true, 00:08:35.604 "hosts": [], 00:08:35.604 "serial_number": "SPDK00000000000004", 00:08:35.604 "model_number": "SPDK bdev Controller", 00:08:35.604 "max_namespaces": 32, 00:08:35.604 "min_cntlid": 1, 00:08:35.604 "max_cntlid": 65519, 00:08:35.604 "namespaces": [ 00:08:35.604 { 00:08:35.604 "nsid": 1, 00:08:35.604 "bdev_name": "Null4", 00:08:35.604 "name": "Null4", 00:08:35.604 "nguid": "3E07C7DCBBA04CF38AE3E9E7DB740EEB", 00:08:35.604 "uuid": "3e07c7dc-bba0-4cf3-8ae3-e9e7db740eeb" 00:08:35.604 } 00:08:35.604 ] 00:08:35.604 } 00:08:35.604 ] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.604 rmmod nvme_tcp 00:08:35.604 rmmod nvme_fabrics 00:08:35.604 rmmod nvme_keyring 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 142192 ']' 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 142192 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 142192 ']' 00:08:35.604 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 142192 00:08:35.605 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:35.605 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.605 10:54:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142192 00:08:35.605 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.605 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.605 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142192' 00:08:35.605 killing process with pid 142192 00:08:35.605 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 142192 00:08:35.605 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 142192 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.864 10:54:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.413 10:54:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.413 00:08:38.413 real 0m5.484s 00:08:38.413 user 0m4.434s 00:08:38.413 sys 0m1.878s 00:08:38.413 10:54:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.413 10:54:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:38.413 ************************************ 00:08:38.413 END TEST nvmf_target_discovery 00:08:38.413 ************************************ 00:08:38.413 10:54:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:38.413 10:54:52 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.413 10:54:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.413 10:54:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.413 10:54:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.413 ************************************ 00:08:38.413 START TEST nvmf_referrals 00:08:38.413 ************************************ 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.413 * Looking for test storage... 00:08:38.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.413 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
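referrals.sh drives the whole test from these constants: each of 127.0.0.2/.3/.4 is registered as a discovery referral on port 4430, then asserted on through both the RPC interface and an actual nvme discover against the 8009 discovery listener. A hedged sketch of that setup phase, assuming scripts/rpc.py and an already-running target (the matching rpc_cmd invocations appear verbatim further down in this log):

  # Create the TCP transport, expose the discovery service on 10.0.0.2:8009,
  # and register the three referral addresses the test later asserts on.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3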
00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.414 10:54:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.327 10:54:54 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.327 10:54:54 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.327 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.327 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.328 10:54:54 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:40.328 00:08:40.328 --- 10.0.0.2 ping statistics --- 00:08:40.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.328 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:40.328 00:08:40.328 --- 10.0.0.1 ping statistics --- 00:08:40.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.328 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=144234 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 144234 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 144234 ']' 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
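Everything from the PCI scan down to the pings above builds a two-port loopback out of the host's NIC pair: one port is moved into a private network namespace as the target side, the other stays in the root namespace as the initiator (presumably cabled back-to-back on this phy rig), and nvmf_tgt is then launched inside the namespace so its listener binds the target address. Condensed into a sketch — interface names cvl_0_0/cvl_0_1 come from this host's device scan and the binary path from this workspace, so both will differ elsewhere:

  # Loopback topology: cvl_0_0 (target, 10.0.0.2) lives in its own netns,
  # cvl_0_1 (initiator, 10.0.0.1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target sanity check
  # The target then runs inside the namespace, as the log shows next:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &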
00:08:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.328 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.328 [2024-07-11 10:54:54.692155] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:08:40.328 [2024-07-11 10:54:54.692241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.328 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.589 [2024-07-11 10:54:54.758664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.589 [2024-07-11 10:54:54.846340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.589 [2024-07-11 10:54:54.846402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.589 [2024-07-11 10:54:54.846431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.589 [2024-07-11 10:54:54.846442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.589 [2024-07-11 10:54:54.846452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.589 [2024-07-11 10:54:54.846537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.589 [2024-07-11 10:54:54.846604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.589 [2024-07-11 10:54:54.846653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.589 [2024-07-11 10:54:54.846656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.589 10:54:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.589 [2024-07-11 10:54:54.999649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.589 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.589 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:40.589 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.589 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.589 [2024-07-11 10:54:55.011901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.848 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:41.107 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.108 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.367 10:54:55 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.367 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.628 10:54:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.890 10:54:56 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.890 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:42.150 
10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.150 rmmod nvme_tcp 00:08:42.150 rmmod nvme_fabrics 00:08:42.150 rmmod nvme_keyring 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 144234 ']' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 144234 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 144234 ']' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 144234 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144234 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144234' 00:08:42.150 killing process with pid 144234 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 144234 00:08:42.150 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 144234 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.410 10:54:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.952 10:54:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.952 00:08:44.952 real 0m6.518s 00:08:44.952 user 0m9.144s 00:08:44.952 sys 0m2.162s 00:08:44.952 10:54:58 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.952 10:54:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.952 ************************************ 00:08:44.952 END TEST nvmf_referrals 00:08:44.952 ************************************ 00:08:44.952 10:54:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.952 10:54:58 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.952 10:54:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.952 10:54:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.952 10:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.952 ************************************ 00:08:44.952 START TEST nvmf_connect_disconnect 00:08:44.952 ************************************ 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.952 * Looking for test storage... 00:08:44.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.952 10:54:58 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.952 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.953 10:54:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:46.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:46.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.861 10:55:01 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:46.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:46.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:08:46.861 00:08:46.861 --- 10.0.0.2 ping statistics --- 00:08:46.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.861 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:46.861 00:08:46.861 --- 10.0.0.1 ping statistics --- 00:08:46.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.861 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.861 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=146529 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 146529 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 146529 ']' 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:47.121 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.121 [2024-07-11 10:55:01.341041] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:08:47.121 [2024-07-11 10:55:01.341122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.121 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.121 [2024-07-11 10:55:01.401942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.121 [2024-07-11 10:55:01.483125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.121 [2024-07-11 10:55:01.483177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.121 [2024-07-11 10:55:01.483204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.121 [2024-07-11 10:55:01.483215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.121 [2024-07-11 10:55:01.483224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.121 [2024-07-11 10:55:01.483303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.121 [2024-07-11 10:55:01.483422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.121 [2024-07-11 10:55:01.483515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.121 [2024-07-11 10:55:01.483518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 [2024-07-11 10:55:01.637494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.383 10:55:01 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 [2024-07-11 10:55:01.689460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:47.383 10:55:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:49.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.151 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:35.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.713 rmmod nvme_tcp 00:12:37.713 rmmod nvme_fabrics 00:12:37.713 rmmod nvme_keyring 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 146529 ']' 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 146529 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 146529 
']' 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 146529 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146529 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146529' 00:12:37.713 killing process with pid 146529 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 146529 00:12:37.713 10:58:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 146529 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.974 10:58:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.882 10:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.882 00:12:39.882 real 3m55.326s 00:12:39.882 user 14m53.566s 00:12:39.882 sys 0m36.793s 00:12:39.882 10:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.882 10:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.882 ************************************ 00:12:39.882 END TEST nvmf_connect_disconnect 00:12:39.882 ************************************ 00:12:39.882 10:58:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.882 10:58:54 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:39.882 10:58:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.882 10:58:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.882 10:58:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.882 ************************************ 00:12:39.882 START TEST nvmf_multitarget 00:12:39.882 ************************************ 00:12:39.882 10:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:40.142 * Looking for test storage... 
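The trace above shows target/connect_disconnect.sh looping 100 times against nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, with each pass printing one 'disconnected 1 controller(s)' line. A minimal bash sketch of that pattern, assuming stock nvme-cli flags; the wait-for-namespace check is a simplified stand-in, not the script's verbatim helper:

NVME_CONNECT='nvme connect -i 8'            # -i 8: request 8 I/O queues, as set at connect_disconnect.sh@29
num_iterations=100
for ((i = 1; i <= num_iterations; i++)); do
    # attach to the SPDK subsystem exported from inside the target netns
    $NVME_CONNECT -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # crude wait until the controller's namespace shows up (hypothetical stand-in check)
    until nvme list 2>/dev/null | grep -q SPDKISFASTANDAWESOME; do sleep 0.1; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # logs 'NQN:... disconnected 1 controller(s)'
done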
00:12:40.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.142 10:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:42.052 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:42.052 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:42.052 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:42.052 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.052 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:42.312 00:12:42.312 --- 10.0.0.2 ping statistics --- 00:12:42.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.312 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:42.312 00:12:42.312 --- 10.0.0.1 ping statistics --- 00:12:42.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.312 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:42.312 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=177428 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 177428 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 177428 ']' 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.313 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.313 [2024-07-11 10:58:56.554865] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
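For orientation, the nvmf_tcp_init steps traced in both test preambles build a two-namespace topology from the pair of ice ports: cvl_0_0 becomes the target interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and the bidirectional pings verify the link. Condensed from the commands above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator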
00:12:42.313 [2024-07-11 10:58:56.554937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.313 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.313 [2024-07-11 10:58:56.617588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.313 [2024-07-11 10:58:56.704973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.313 [2024-07-11 10:58:56.705030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.313 [2024-07-11 10:58:56.705058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.313 [2024-07-11 10:58:56.705070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.313 [2024-07-11 10:58:56.705080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.313 [2024-07-11 10:58:56.705233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.313 [2024-07-11 10:58:56.705298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.313 [2024-07-11 10:58:56.705351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.313 [2024-07-11 10:58:56.705354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:42.573 10:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:42.832 "nvmf_tgt_1" 00:12:42.832 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:42.832 "nvmf_tgt_2" 00:12:42.832 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.832 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:43.091 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:43.091 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:43.091 true 00:12:43.091 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:43.350 true 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.350 rmmod nvme_tcp 00:12:43.350 rmmod nvme_fabrics 00:12:43.350 rmmod nvme_keyring 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:43.350 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 177428 ']' 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 177428 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 177428 ']' 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 177428 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177428 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177428' 00:12:43.351 killing process with pid 177428 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 177428 00:12:43.351 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 177428 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.612 10:58:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.161 10:58:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.161 00:12:46.161 real 0m5.722s 00:12:46.161 user 0m6.437s 00:12:46.161 sys 0m1.923s 00:12:46.161 10:58:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.161 10:58:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.161 ************************************ 00:12:46.161 END TEST nvmf_multitarget 00:12:46.161 ************************************ 00:12:46.161 10:59:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:46.161 10:59:00 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.161 10:59:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:46.161 10:59:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.161 10:59:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:46.161 ************************************ 00:12:46.161 START TEST nvmf_rpc 00:12:46.161 ************************************ 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:46.161 * Looking for test storage... 
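The nvmf_multitarget run that just ended exercises SPDK's per-target RPC surface end to end: nvmf_get_targets piped through jq length shows only the default target (1), two named targets are created, the count is re-checked against 3, both are deleted, and the count is checked back down to 1. A minimal standalone sketch of the same flow, with $SPDK_DIR standing in for the checkout root used in this run:

  #!/usr/bin/env bash
  set -e
  RPC="$SPDK_DIR/test/nvmf/target/multitarget_rpc.py"   # path as in this run
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]      # default target only
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32           # -s 32 as in the log above
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]      # default + the 2 new targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]      # back to the default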
00:12:46.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.161 10:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
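The nvmftestinit trace that follows is common.sh gathering supported NICs: it builds allowlists of PCI device IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus several Mellanox ConnectX parts), then resolves each matching PCI function to its kernel netdev through sysfs, which is how the cvl_0_0/cvl_0_1 names below are found. A minimal sketch of that sysfs lookup, using only the IDs visible in this log:

  # List net interfaces backed by Intel E810 (0x8086:0x159b) PCI functions.
  for pci in /sys/bus/pci/devices/*; do
      if [ "$(cat "$pci/vendor")" = "0x8086" ] &&
         [ "$(cat "$pci/device")" = "0x159b" ]; then
          echo "Found $(basename "$pci"):" "$(ls "$pci/net" 2>/dev/null)"
      fi
  done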
00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:48.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:48.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:48.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:48.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.075 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:12:48.076 00:12:48.076 --- 10.0.0.2 ping statistics --- 00:12:48.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.076 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:12:48.076 00:12:48.076 --- 10.0.0.1 ping statistics --- 00:12:48.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.076 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=179528 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 179528 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 179528 ']' 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.076 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.076 [2024-07-11 10:59:02.394981] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:12:48.076 [2024-07-11 10:59:02.395057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.076 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.076 [2024-07-11 10:59:02.460954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.335 [2024-07-11 10:59:02.544514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.335 [2024-07-11 10:59:02.544572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
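The nvmf_tgt that starts here (pid 179528) runs inside the network namespace built just above: port cvl_0_0 was moved into cvl_0_0_ns_spdk and given 10.0.0.2/24 as the target side, its sibling cvl_0_1 stays in the root namespace with 10.0.0.1/24 as the initiator side, TCP/4420 is opened in iptables, and one ping in each direction confirms the path. Condensed from the trace, interface names as in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator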
00:12:48.335 [2024-07-11 10:59:02.544599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.335 [2024-07-11 10:59:02.544610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.335 [2024-07-11 10:59:02.544619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.335 [2024-07-11 10:59:02.544807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.335 [2024-07-11 10:59:02.544866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.335 [2024-07-11 10:59:02.544973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.335 [2024-07-11 10:59:02.544976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:48.335 "tick_rate": 2700000000, 00:12:48.335 "poll_groups": [ 00:12:48.335 { 00:12:48.335 "name": "nvmf_tgt_poll_group_000", 00:12:48.335 "admin_qpairs": 0, 00:12:48.335 "io_qpairs": 0, 00:12:48.335 "current_admin_qpairs": 0, 00:12:48.335 "current_io_qpairs": 0, 00:12:48.335 "pending_bdev_io": 0, 00:12:48.335 "completed_nvme_io": 0, 00:12:48.335 "transports": [] 00:12:48.335 }, 00:12:48.335 { 00:12:48.335 "name": "nvmf_tgt_poll_group_001", 00:12:48.335 "admin_qpairs": 0, 00:12:48.335 "io_qpairs": 0, 00:12:48.335 "current_admin_qpairs": 0, 00:12:48.335 "current_io_qpairs": 0, 00:12:48.335 "pending_bdev_io": 0, 00:12:48.335 "completed_nvme_io": 0, 00:12:48.335 "transports": [] 00:12:48.335 }, 00:12:48.335 { 00:12:48.335 "name": "nvmf_tgt_poll_group_002", 00:12:48.335 "admin_qpairs": 0, 00:12:48.335 "io_qpairs": 0, 00:12:48.335 "current_admin_qpairs": 0, 00:12:48.335 "current_io_qpairs": 0, 00:12:48.335 "pending_bdev_io": 0, 00:12:48.335 "completed_nvme_io": 0, 00:12:48.335 "transports": [] 00:12:48.335 }, 00:12:48.335 { 00:12:48.335 "name": "nvmf_tgt_poll_group_003", 00:12:48.335 "admin_qpairs": 0, 00:12:48.335 "io_qpairs": 0, 00:12:48.335 "current_admin_qpairs": 0, 00:12:48.335 "current_io_qpairs": 0, 00:12:48.335 "pending_bdev_io": 0, 00:12:48.335 "completed_nvme_io": 0, 00:12:48.335 "transports": [] 00:12:48.335 } 00:12:48.335 ] 00:12:48.335 }' 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:48.335 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:48.594 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:48.594 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.594 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.594 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.594 [2024-07-11 10:59:02.800997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.594 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:48.595 "tick_rate": 2700000000, 00:12:48.595 "poll_groups": [ 00:12:48.595 { 00:12:48.595 "name": "nvmf_tgt_poll_group_000", 00:12:48.595 "admin_qpairs": 0, 00:12:48.595 "io_qpairs": 0, 00:12:48.595 "current_admin_qpairs": 0, 00:12:48.595 "current_io_qpairs": 0, 00:12:48.595 "pending_bdev_io": 0, 00:12:48.595 "completed_nvme_io": 0, 00:12:48.595 "transports": [ 00:12:48.595 { 00:12:48.595 "trtype": "TCP" 00:12:48.595 } 00:12:48.595 ] 00:12:48.595 }, 00:12:48.595 { 00:12:48.595 "name": "nvmf_tgt_poll_group_001", 00:12:48.595 "admin_qpairs": 0, 00:12:48.595 "io_qpairs": 0, 00:12:48.595 "current_admin_qpairs": 0, 00:12:48.595 "current_io_qpairs": 0, 00:12:48.595 "pending_bdev_io": 0, 00:12:48.595 "completed_nvme_io": 0, 00:12:48.595 "transports": [ 00:12:48.595 { 00:12:48.595 "trtype": "TCP" 00:12:48.595 } 00:12:48.595 ] 00:12:48.595 }, 00:12:48.595 { 00:12:48.595 "name": "nvmf_tgt_poll_group_002", 00:12:48.595 "admin_qpairs": 0, 00:12:48.595 "io_qpairs": 0, 00:12:48.595 "current_admin_qpairs": 0, 00:12:48.595 "current_io_qpairs": 0, 00:12:48.595 "pending_bdev_io": 0, 00:12:48.595 "completed_nvme_io": 0, 00:12:48.595 "transports": [ 00:12:48.595 { 00:12:48.595 "trtype": "TCP" 00:12:48.595 } 00:12:48.595 ] 00:12:48.595 }, 00:12:48.595 { 00:12:48.595 "name": "nvmf_tgt_poll_group_003", 00:12:48.595 "admin_qpairs": 0, 00:12:48.595 "io_qpairs": 0, 00:12:48.595 "current_admin_qpairs": 0, 00:12:48.595 "current_io_qpairs": 0, 00:12:48.595 "pending_bdev_io": 0, 00:12:48.595 "completed_nvme_io": 0, 00:12:48.595 "transports": [ 00:12:48.595 { 00:12:48.595 "trtype": "TCP" 00:12:48.595 } 00:12:48.595 ] 00:12:48.595 } 00:12:48.595 ] 00:12:48.595 }' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
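The jcount/jsum helpers traced around this point are thin jq wrappers over one nvmf_get_stats snapshot: jcount counts the lines a filter produces (4 poll group names, one per core of the 0xF mask), and jsum totals a numeric field with awk (every qpair counter must still be 0 before any host connects). A condensed sketch of the pair, assuming rpc_cmd returns the JSON shown above:

  stats=$(rpc_cmd nvmf_get_stats)                       # capture one snapshot
  jcount() { jq "$1" <<< "$stats" | wc -l; }            # count filter matches
  jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  (( $(jcount '.poll_groups[].name') == 4 ))            # one poll group per core
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))         # idle: no I/O qpairs yet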
00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 Malloc1 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 [2024-07-11 10:59:02.962296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:48.595 10:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:48.595 [2024-07-11 10:59:02.984821] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:48.595 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:48.595 could not add new controller: failed to write to nvme-fabrics device 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.595 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.854 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.854 10:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.423 10:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.423 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.423 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.423 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.423 10:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.337 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.338 10:59:05 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:51.338 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.597 [2024-07-11 10:59:05.763663] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:51.597 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.597 could not add new controller: failed to write to nvme-fabrics device 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.597 10:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.165 10:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.165 10:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.165 10:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.165 10:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:52.165 10:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:54.079 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.338 10:59:08 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.338 [2024-07-11 10:59:08.539563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.338 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.339 10:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.907 10:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.908 10:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.908 10:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.908 10:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:54.908 10:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.820 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 [2024-07-11 10:59:11.357072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.078 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.645 10:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.645 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:57.645 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.645 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:57.645 10:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.558 10:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 [2024-07-11 10:59:14.076012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.815 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.816 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.816 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.816 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.382 10:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.382 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.383 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.383 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.383 10:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:02.288 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.547 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.548 [2024-07-11 10:59:16.850878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.548 10:59:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.488 10:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.489 10:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.489 10:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.489 10:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.489 10:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.398 
10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 [2024-07-11 10:59:19.713232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.398 10:59:19 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.398 10:59:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.335 10:59:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.335 10:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.335 10:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.335 10:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.335 10:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 [2024-07-11 10:59:22.518411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 [2024-07-11 10:59:22.566482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.250 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 [2024-07-11 10:59:22.614632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
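The iterations tracing through here come from the rpc.sh@99-107 loop, which churns one subsystem through five create/attach/detach/delete cycles without ever connecting a host. Stripped of the rpc_cmd/xtrace plumbing, each pass is just six rpc.py calls against the target's /var/tmp/spdk.sock (a condensed sketch of the sequence visible in the trace, not the in-tree helper itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  for i in $(seq 1 5); do
      $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME            # rpc.sh@100
      $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # rpc.sh@101
      $RPC nvmf_subsystem_add_ns "$NQN" Malloc1                            # rpc.sh@102, nsid auto-assigned (here 1)
      $RPC nvmf_subsystem_allow_any_host "$NQN"                            # rpc.sh@103
      $RPC nvmf_subsystem_remove_ns "$NQN" 1                               # rpc.sh@105
      $RPC nvmf_delete_subsystem "$NQN"                                    # rpc.sh@107
  done

rpc_cmd in the trace is the autotest wrapper around these same rpc.py invocations; the repeated '[[ 0 == 0 ]]' checks assert that each call returned success.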
00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 [2024-07-11 10:59:22.662835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.251 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.510 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
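By contrast, the rpc.sh@81-94 loop that finished a little earlier did connect a host on every pass: after nvme connect, the waitforserial helper polls lsblk until a block device advertising the subsystem's serial number appears, and waitforserial_disconnect polls the same way until it is gone. A minimal sketch of that wait, assuming nvme-cli and the 10.0.0.2:4420 listener from this run (the in-tree helper adds a few more guards):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

  waitforserial() {
      local serial=$1 i=0 nvme_devices=0
      while (( i++ <= 15 )); do
          sleep 2
          # One row per block device; count the ones carrying our serial.
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices >= 1 )) && return 0
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1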
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 [2024-07-11 10:59:22.711023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:08.511 "tick_rate": 2700000000,
00:13:08.511 "poll_groups": [
00:13:08.511 {
00:13:08.511 "name": "nvmf_tgt_poll_group_000",
00:13:08.511 "admin_qpairs": 2,
00:13:08.511 "io_qpairs": 84,
00:13:08.511 "current_admin_qpairs": 0,
00:13:08.511 "current_io_qpairs": 0,
00:13:08.511 "pending_bdev_io": 0,
00:13:08.511 "completed_nvme_io": 121,
00:13:08.511 "transports": [
00:13:08.511 {
00:13:08.511 "trtype": "TCP"
00:13:08.511 }
00:13:08.511 ]
00:13:08.511 },
00:13:08.511 {
00:13:08.511 "name": "nvmf_tgt_poll_group_001",
00:13:08.511 "admin_qpairs": 2,
00:13:08.511 "io_qpairs": 84,
00:13:08.511 "current_admin_qpairs": 0,
00:13:08.511 "current_io_qpairs": 0,
00:13:08.511 "pending_bdev_io": 0,
00:13:08.511 "completed_nvme_io": 195,
00:13:08.511 "transports": [
00:13:08.511 {
00:13:08.511 "trtype": "TCP"
00:13:08.511 }
00:13:08.511 ]
00:13:08.511 },
00:13:08.511 {
00:13:08.511 "name": "nvmf_tgt_poll_group_002",
00:13:08.511 "admin_qpairs": 1,
00:13:08.511 "io_qpairs": 84,
00:13:08.511 "current_admin_qpairs": 0,
00:13:08.511 "current_io_qpairs": 0,
00:13:08.511 "pending_bdev_io": 0,
00:13:08.511 "completed_nvme_io": 159,
00:13:08.511 "transports": [
00:13:08.511 {
00:13:08.511 "trtype": "TCP"
00:13:08.511 }
00:13:08.511 ]
00:13:08.511 },
00:13:08.511 {
00:13:08.511 "name": "nvmf_tgt_poll_group_003",
00:13:08.511 "admin_qpairs": 2,
00:13:08.511 "io_qpairs": 84,
00:13:08.511 "current_admin_qpairs": 0,
00:13:08.511 "current_io_qpairs": 0,
00:13:08.511 "pending_bdev_io": 0,
00:13:08.511 "completed_nvme_io": 211,
00:13:08.511 "transports": [
00:13:08.511 {
00:13:08.511 "trtype": "TCP"
00:13:08.511 }
00:13:08.511 ]
00:13:08.511 }
00:13:08.511 ]
00:13:08.511 }'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:08.511 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 179528 ']'
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 179528
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 179528 ']'
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 179528
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179528
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179528'
killing process with pid 179528
00:13:08.512 10:59:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 179528
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 179528
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:08.772 10:59:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:11.324 10:59:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:11.324
00:13:11.324 real 0m25.158s
00:13:11.324 user 1m21.493s
00:13:11.324 sys 0m4.248s
00:13:11.324 10:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:11.324 10:59:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:11.324 ************************************
00:13:11.324 END TEST nvmf_rpc
00:13:11.324 ************************************
00:13:11.324 10:59:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:13:11.324 10:59:25 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:11.324 10:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:11.324 10:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:11.324 10:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:11.324 ************************************
00:13:11.324 START TEST nvmf_invalid
00:13:11.324 ************************************
00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:11.324 * Looking for test storage...
00:13:11.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.324 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.325 10:59:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.236 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:13.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms
00:13:13.237
00:13:13.237 --- 10.0.0.2 ping statistics ---
00:13:13.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.237 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:13.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:13.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:13:13.237
00:13:13.237 --- 10.0.0.1 ping statistics ---
00:13:13.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.237 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=184064
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 184064
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 184064 ']'
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:13.237 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:13.237 [2024-07-11 10:59:27.601373] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
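Everything nvmftestinit traced above reduces to a small amount of ip(8) plumbing: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator keeps cvl_0_1 as 10.0.0.1 in the root namespace, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed from the trace, with interface names and core mask as in this run (a sketch, not the full helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

  # The target then comes up inside the namespace, as in the nvmf/common.sh@480 trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &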
00:13:13.237 [2024-07-11 10:59:27.601448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:13.237 EAL: No free 2048 kB hugepages reported on node 1
00:13:13.496 [2024-07-11 10:59:27.665218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:13.496 [2024-07-11 10:59:27.746575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:13.496 [2024-07-11 10:59:27.746625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:13.496 [2024-07-11 10:59:27.746647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:13.496 [2024-07-11 10:59:27.746657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:13.496 [2024-07-11 10:59:27.746667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:13.496 [2024-07-11 10:59:27.746713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:13.496 [2024-07-11 10:59:27.746837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:13.496 [2024-07-11 10:59:27.746864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:13.496 [2024-07-11 10:59:27.746867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:13.496 10:59:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24325
00:13:13.755 [2024-07-11 10:59:28.163405] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:14.013 {
00:13:14.013 "nqn": "nqn.2016-06.io.spdk:cnode24325",
00:13:14.013 "tgt_name": "foobar",
00:13:14.013 "method": "nvmf_create_subsystem",
00:13:14.013 "req_id": 1
00:13:14.013 }
00:13:14.013 Got JSON-RPC error response
00:13:14.013 response:
00:13:14.013 {
00:13:14.013 "code": -32603,
00:13:14.013 "message": "Unable to find target foobar"
00:13:14.013 }'
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:14.013 {
00:13:14.013 "nqn": "nqn.2016-06.io.spdk:cnode24325",
00:13:14.013 "tgt_name": "foobar",
00:13:14.013 "method": "nvmf_create_subsystem",
00:13:14.013 "req_id": 1
00:13:14.013 }
00:13:14.013 Got JSON-RPC error response
00:13:14.013 response:
00:13:14.013 {
00:13:14.013 "code": -32603,
00:13:14.013 "message": "Unable to find target foobar"
00:13:14.013 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12309
00:13:14.013 [2024-07-11 10:59:28.408257] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12309: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:14.013 {
00:13:14.013 "nqn": "nqn.2016-06.io.spdk:cnode12309",
00:13:14.013 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:14.013 "method": "nvmf_create_subsystem",
00:13:14.013 "req_id": 1
00:13:14.013 }
00:13:14.013 Got JSON-RPC error response
00:13:14.013 response:
00:13:14.013 {
00:13:14.013 "code": -32602,
00:13:14.013 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:14.013 }'
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:14.013 {
00:13:14.013 "nqn": "nqn.2016-06.io.spdk:cnode12309",
00:13:14.013 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:14.013 "method": "nvmf_create_subsystem",
00:13:14.013 "req_id": 1
00:13:14.013 }
00:13:14.013 Got JSON-RPC error response
00:13:14.013 response:
00:13:14.013 {
00:13:14.013 "code": -32602,
00:13:14.013 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:14.013 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:14.013 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29730
00:13:14.273 [2024-07-11 10:59:28.661138] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29730: invalid model number 'SPDK_Controller'
00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:14.273 {
00:13:14.273 "nqn": "nqn.2016-06.io.spdk:cnode29730",
00:13:14.273 "model_number": "SPDK_Controller\u001f",
00:13:14.273 "method": "nvmf_create_subsystem",
00:13:14.273 "req_id": 1
00:13:14.273 }
00:13:14.273 Got JSON-RPC error response
00:13:14.273 response:
00:13:14.273 {
00:13:14.273 "code": -32602,
00:13:14.273 "message": "Invalid MN SPDK_Controller\u001f"
00:13:14.273 }'
00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:14.273 {
00:13:14.273 "nqn": "nqn.2016-06.io.spdk:cnode29730",
00:13:14.273 "model_number": "SPDK_Controller\u001f",
00:13:14.273 "method": "nvmf_create_subsystem",
00:13:14.273 "req_id": 1
00:13:14.273 }
00:13:14.273 Got JSON-RPC error response
00:13:14.273 response:
00:13:14.273 {
00:13:14.273 "code": -32602,
00:13:14.273 "message": "Invalid MN SPDK_Controller\u001f"
00:13:14.273 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82'
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.273 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:14.533 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' QMr$X~fh#m-!MH~SD2m0' 00:13:14.534 10:59:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ' QMr$X~fh#m-!MH~SD2m0' nqn.2016-06.io.spdk:cnode24013 00:13:14.793 [2024-07-11 10:59:28.982197] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24013: invalid serial number ' QMr$X~fh#m-!MH~SD2m0' 00:13:14.793 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:14.793 { 00:13:14.793 "nqn": "nqn.2016-06.io.spdk:cnode24013", 00:13:14.793 "serial_number": " QMr$X~fh#m-!MH~SD2m0", 00:13:14.793 "method": "nvmf_create_subsystem", 00:13:14.793 "req_id": 1 00:13:14.793 } 00:13:14.793 Got JSON-RPC error response 00:13:14.793 response: 00:13:14.793 { 00:13:14.793 "code": -32602, 00:13:14.793 "message": "Invalid SN QMr$X~fh#m-!MH~SD2m0" 00:13:14.793 }' 00:13:14.793 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:14.793 { 00:13:14.793 "nqn": "nqn.2016-06.io.spdk:cnode24013", 00:13:14.793 "serial_number": " QMr$X~fh#m-!MH~SD2m0", 00:13:14.793 "method": "nvmf_create_subsystem", 00:13:14.793 "req_id": 1 00:13:14.793 } 00:13:14.793 Got JSON-RPC error response 00:13:14.793 response: 00:13:14.793 { 00:13:14.793 "code": -32602, 00:13:14.793 "message": "Invalid SN QMr$X~fh#m-!MH~SD2m0" 00:13:14.793 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:14.793 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:14.793 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 97 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:14.794 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x37' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=7 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:13:14.795 10:59:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' /dev/null' 00:13:17.644 10:59:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.558 10:59:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.558 00:13:19.558 real 0m8.650s 00:13:19.558 user 0m20.055s 00:13:19.558 sys 0m2.443s 00:13:19.558 10:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.558 10:59:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.558 ************************************ 00:13:19.558 END TEST nvmf_invalid 00:13:19.558 
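The wall of trace above is TEST nvmf_invalid's gen_random_s helper at work: target/invalid.sh builds random serial and model numbers one byte at a time (a 21-byte and a 41-byte string in this run), feeds them to nvmf_create_subsystem over JSON-RPC, and asserts that the target rejects them with the expected "Invalid SN"/"Invalid MN" error text. A condensed sketch of the helper, reconstructed from the trace — the loop body is exactly what the printf/echo/string+= triplets show, while the leading-'-' guard at invalid.sh line 28 appears in the trace only as a test, so its action here is a hypothetical stand-in:

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # same code points as the array traced at invalid.sh line 21
        for ((ll = 0; ll < length; ll++)); do
            # printf %x renders a random code point as hex; echo -e turns it into a byte
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # line 28 checks for a leading '-' so the string can't be taken for an option;
        # stripping it is an assumed action, the trace shows only the [[ ... == - ]] test
        [[ ${string:0:1} == - ]] && string=${string#-}
        echo "$string"
    }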
************************************ 00:13:19.558 10:59:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.558 10:59:33 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:19.558 10:59:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.558 10:59:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.558 10:59:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.558 ************************************ 00:13:19.558 START TEST nvmf_abort 00:13:19.558 ************************************ 00:13:19.558 10:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:19.817 * Looking for test storage... 00:13:19.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.818 10:59:34 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.818 10:59:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.728 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.729 
10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:21.729 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:21.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:21.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:21.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.729 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:21.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:21.988 00:13:21.988 --- 10.0.0.2 ping statistics --- 00:13:21.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.988 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:21.988 00:13:21.988 --- 10.0.0.1 ping statistics --- 00:13:21.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.988 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=186631 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 186631 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 186631 ']' 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.988 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.988 [2024-07-11 10:59:36.311301] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
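Everything from nvmf/common.sh traced above is nvmf_tcp_init building the physical-NIC test topology: the first port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before any NVMe/TCP traffic flows. Collected from the trace, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target application itself is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command above), which is why it can listen on 10.0.0.2 while test tools connect from the root namespace.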
00:13:21.988 [2024-07-11 10:59:36.311386] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.989 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.989 [2024-07-11 10:59:36.375247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.248 [2024-07-11 10:59:36.462885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.248 [2024-07-11 10:59:36.462936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.248 [2024-07-11 10:59:36.462964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.248 [2024-07-11 10:59:36.462975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.248 [2024-07-11 10:59:36.462985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.248 [2024-07-11 10:59:36.463071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.248 [2024-07-11 10:59:36.463136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.248 [2024-07-11 10:59:36.463138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 [2024-07-11 10:59:36.603622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 Malloc0 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 Delay0 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
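With the target up, abort.sh provisions it over JSON-RPC. rpc_cmd (from autotest_common.sh) forwards its arguments to scripts/rpc.py against the target's RPC socket, so the calls traced above and continued just below are roughly equivalent to running (wrapper details aside; the flag glosses for the malloc/delay/subsystem calls are the standard rpc.py meanings):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev with 4096-byte blocks
    # wrap Malloc0 so every I/O sees ~1 s of added latency (avg/p99, read and write, in us)
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The delay bdev is the whole point of the fixture: against a plain malloc bdev, I/O completes too quickly for an abort to ever find its victim in flight.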
00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.248 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.509 [2024-07-11 10:59:36.676147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.509 10:59:36 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:22.509 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.509 [2024-07-11 10:59:36.823903] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:25.053 Initializing NVMe Controllers 00:13:25.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:25.053 controller IO queue size 128 less than required 00:13:25.053 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:25.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:25.053 Initialization complete. Launching workers. 
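The abort example then connects from the root namespace and exercises the path under test: with Delay0 holding every command for about a second and -q 128 keeping 128 commands outstanding, requests are guaranteed to still be in flight when the example fires abort commands at them; the per-namespace and per-controller tallies follow below. The invocation, as traced above:

    # -c 0x1: one core; -t 1: run time in seconds; -q 128: queue depth to keep outstanding
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The "controller IO queue size 128 less than required" notice above is expected here: the example asks for more concurrent commands than the controller's queue grants, so some requests wait inside the NVMe driver — which only widens the window in which an abort can catch them.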
00:13:25.053 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32753 00:13:25.053 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32818, failed to submit 62 00:13:25.053 success 32757, unsuccess 61, failed 0 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.053 10:59:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.053 rmmod nvme_tcp 00:13:25.053 rmmod nvme_fabrics 00:13:25.053 rmmod nvme_keyring 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 186631 ']' 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 186631 ']' 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 186631' 00:13:25.053 killing process with pid 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 186631 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.053 10:59:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.054 10:59:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.054 10:59:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.964 10:59:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:26.964 00:13:26.964 real 0m7.441s 00:13:26.964 user 0m11.099s 00:13:26.964 sys 0m2.443s 00:13:26.964 10:59:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.964 10:59:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:26.964 ************************************ 00:13:26.964 END TEST nvmf_abort 00:13:26.964 ************************************ 00:13:27.224 10:59:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.224 10:59:41 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:27.224 10:59:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.224 10:59:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.224 10:59:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.224 ************************************ 00:13:27.224 START TEST nvmf_ns_hotplug_stress 00:13:27.224 ************************************ 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:27.224 * Looking for test storage... 00:13:27.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.224 10:59:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:27.224 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.225 10:59:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.225 10:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.133 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:29.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:29.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.392 10:59:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:29.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.392 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:29.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.393 10:59:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:13:29.393 00:13:29.393 --- 10.0.0.2 ping statistics --- 00:13:29.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.393 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:13:29.393 00:13:29.393 --- 10.0.0.1 ping statistics --- 00:13:29.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.393 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=188980 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 188980 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 188980 ']' 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.393 10:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.393 [2024-07-11 10:59:43.760305] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
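Pulled together from the nvmf_tcp_init and nvmfappstart traces above: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, a ping in each direction sanity-checks the link, and the target is then launched inside the namespace. A sketch of the equivalent standalone steps, copied from the trace ($SPDK as in the earlier sketch; waitforlisten is the harness helper, which per its own message blocks until the app answers on /var/tmp/spdk.sock):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, test ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten $nvmfpid    # polls until the RPC socket is up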
00:13:29.393 [2024-07-11 10:59:43.760376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.393 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.651 [2024-07-11 10:59:43.825173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.651 [2024-07-11 10:59:43.909596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.651 [2024-07-11 10:59:43.909644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.651 [2024-07-11 10:59:43.909671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.651 [2024-07-11 10:59:43.909683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.651 [2024-07-11 10:59:43.909693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.651 [2024-07-11 10:59:43.909843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.651 [2024-07-11 10:59:43.909894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.651 [2024-07-11 10:59:43.909897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:29.651 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.912 [2024-07-11 10:59:44.323118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.171 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.428 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.429 [2024-07-11 10:59:44.849901] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.689 10:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.948 10:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:31.206 Malloc0 00:13:31.206 10:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:31.464 Delay0 00:13:31.464 10:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.722 10:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:31.980 NULL1 00:13:31.980 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:32.238 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=189281 00:13:32.238 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:32.238 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:32.238 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.238 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.497 10:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.755 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:32.755 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:33.013 true 00:13:33.013 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:33.014 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.272 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.530 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:33.530 10:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:33.788 true 00:13:33.788 10:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:33.788 10:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.729 Read completed with error (sct=0, sc=11) 00:13:34.729 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.988 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:34.988 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:35.249 true 00:13:35.249 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:35.249 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.506 10:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.767 10:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:35.767 10:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:36.026 true 00:13:36.026 10:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:36.026 10:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.968 10:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.227 10:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:37.227 10:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:37.486 true 00:13:37.486 10:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:37.486 10:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.745 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.004 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:38.004 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:38.264 true 00:13:38.264 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:38.264 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
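Rounds like the one above repeat for the next thirty seconds: the @44..@50 lines of ns_hotplug_stress.sh are one pass of the hotplug loop, which runs for as long as the spdk_nvme_perf workload started at @40 is alive, re-attaching namespace 1 and growing the NULL1 bdev by one block each time. Reconstructed as a sketch from the trace ($rpc and $SPDK as in the earlier sketches, variable names following the script's own):

$SPDK/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 $PERF_PID; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 $null_size
done

The interleaved 'Read completed with error (sct=0, sc=11)' messages are the point of the test: reads race the namespace removal and fail, with -Q 1000 evidently suppressing all but every thousandth such error from the log.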
00:13:38.524 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.782 10:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:38.782 10:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:39.041 true 00:13:39.041 10:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:39.041 10:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.983 10:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.242 10:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:40.242 10:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:40.501 true 00:13:40.501 10:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:40.501 10:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.441 10:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.441 10:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:41.441 10:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:41.699 true 00:13:41.699 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:41.699 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.958 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.216 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:42.216 10:59:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:42.474 true 00:13:42.474 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:42.474 10:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.412 10:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.670 10:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:43.670 10:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:43.929 true 00:13:43.929 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:43.929 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.188 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.446 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:44.446 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:44.704 true 00:13:44.704 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:44.704 10:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.642 10:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.901 11:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:45.901 11:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:46.160 true 00:13:46.160 11:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:46.160 11:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.419 11:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.678 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:46.678 11:00:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:46.936 true 00:13:46.936 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:46.936 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.193 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.451 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:47.451 11:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:47.708 true 00:13:47.708 11:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:47.708 11:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.083 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.083 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:49.083 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:49.351 true 00:13:49.351 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:49.351 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.609 11:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.868 11:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:49.868 11:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:50.127 true 00:13:50.127 11:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:50.127 11:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.077 11:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.335 11:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:51.335 11:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:51.595 true 00:13:51.595 11:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:51.595 11:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.852 11:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.111 11:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:52.111 11:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:52.370 true 00:13:52.370 11:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:52.370 11:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.307 11:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.565 11:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:53.565 11:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:53.823 true 00:13:53.823 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:53.823 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.082 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.341 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:54.341 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:54.598 true 00:13:54.598 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:54.598 11:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.854 11:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.111 11:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:55.111 11:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:55.367 true 00:13:55.367 11:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:55.367 11:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.463 11:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.720 11:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:56.720 11:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:56.977 true 00:13:56.977 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:56.977 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.235 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.492 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:57.493 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:57.752 true 00:13:57.752 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:57.752 11:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.689 11:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.689 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:58.689 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:58.947 true 00:13:58.947 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:58.947 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.206 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.463 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:59.463 11:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:59.720 true 00:13:59.720 11:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:13:59.720 11:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.655 11:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.912 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:00.913 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:01.171 true 00:14:01.171 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:14:01.171 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.430 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.689 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:01.689 11:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:01.948 true 00:14:01.948 11:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281 00:14:01.948 11:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.887 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.887 Initializing NVMe Controllers 00:14:02.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:02.887 Controller IO queue size 128, less than required. 00:14:02.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:02.887 Controller IO queue size 128, less than required. 00:14:02.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:02.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:02.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:02.887 Initialization complete. Launching workers.
00:14:02.887 ========================================================
00:14:02.887                                                                                Latency(us)
00:14:02.887 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:02.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     858.90       0.42   72967.71    3037.25 1027315.38
00:14:02.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10228.67       4.99   12476.71    3144.19  543228.39
00:14:02.887 ========================================================
00:14:02.887 Total                                                                    :   11087.57       5.41   17162.65    3037.25 1027315.38
00:14:02.887
00:14:02.887 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:14:02.887 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:14:03.146 true
00:14:03.146 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 189281
00:14:03.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (189281) - No such process
00:14:03.146 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 189281
00:14:03.146 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.404 11:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:03.662 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:03.662 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:03.662 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:03.662 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:03.662 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:03.925 null0
00:14:03.925 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:03.925 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:03.925 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:04.183 null1
00:14:04.183 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:04.183 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:04.183 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:04.441 null2
00:14:04.441 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:04.441 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.441 11:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:04.700 null3 00:14:04.700 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:04.700 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.700 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:04.958 null4 00:14:04.958 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:04.958 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.958 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:05.216 null5 00:14:05.216 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:05.216 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:05.216 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:05.474 null6 00:14:05.474 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:05.474 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:05.474 11:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:05.732 null7 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:05.732 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
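
The wall of null_size=.../bdev_null_resize NULL1 ... traces that fills the first half of this section is the single-namespace phase of ns_hotplug_stress.sh: for as long as a perf workload (pid 189281 in this run) stays alive, the script hot-removes namespace 1 of cnode1, hot-adds the Delay0 bdev back, bumps null_size by one, and resizes the NULL1 bdev to it, so the target has to survive continuous attach/detach/resize while I/O is in flight. A minimal sketch of that control flow, reconstructed from the @44-@50 trace lines (rpc and perf_pid are stand-in names, not necessarily the script's own variables):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1021                                                   # seeded so the first pass in this capture is 1022
    while kill -0 "$perf_pid"; do                                    # @44: keep looping while the workload lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 # @45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add the Delay0 bdev back
        ((++null_size))                                              # @49: 1022, 1023, ... as in the traces
        "$rpc" bdev_null_resize NULL1 "$null_size"                   # @50: resize the null bdev under I/O
    done
    wait "$perf_pid"                                                 # @53: reap the workload once kill -0 fails

The loop ends exactly where the log prints "kill: (189281) - No such process": the workload has exited, the resize to 1029 was the last one, and the script reaps pid 189281 before tearing both namespaces down and moving to the multi-threaded phase.
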
00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
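
After the single-namespace phase, the @58-@60 traces show the fan-out: the script creates eight null bdevs (null0 through null7, 100 MB each with a 4096-byte block size) and then, in the @62-@64 lines, launches one backgrounded add_remove worker per bdev, recording each worker's pid. A sketch of the two loops, assuming rootdir points at the SPDK checkout seen in the paths above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                             # @59
        "$rootdir/scripts/rpc.py" bdev_null_create "null$i" 100 4096 # @60: name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do                             # @62
        add_remove $((i + 1)) "null$i" &                             # @63: worker i handles NSID i+1
        pids+=($!)                                                   # @64: remember the pid for the final wait
    done

The "wait 193936 193937 193939 193941 193943 193945 193947 193949" trace further down (@66) is the parent blocking on exactly these eight pids.
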
00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
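
Every @14-@18 trace from here to the end of the run comes out of the add_remove helper itself: each worker owns one fixed (nsid, bdev) pair and hot-adds then hot-removes that namespace ten times in a row. Reconstructed from the traced lines (the real function in ns_hotplug_stress.sh may differ in small details):

    add_remove() {
        local nsid=$1 bdev=$2                # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do       # @16: the (( ++i )) / (( i < 10 )) pairs in the traces
            # @17: attach the bdev as namespace $nsid of cnode1
            "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # @18: detach it again while the other seven workers do the same
            "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Because each worker uses its own NSID, the eight streams never collide on a single namespace; what is being stressed is the target's handling of concurrent attach and detach RPCs against the same subsystem.
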
00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 193936 193937 193939 193941 193943 193945 193947 193949 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.733 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.992 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.251 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.510 11:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.768 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.769 11:00:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.769 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.769 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.769 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.027 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.285 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.543 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.543 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.543 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.543 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.543 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.802 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.802 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.802 11:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.802 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.802 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.802 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.061 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.061 
11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.320 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.579 11:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.837 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.095 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.351 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.351 
11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.352 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.609 11:00:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.867 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.125 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.383 
11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.383 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.642 11:00:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.901 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
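
The interleaved add/remove traffic above is the hotplug loop in target/ns_hotplug_stress.sh; the @16/@17/@18 markers are its line numbers. Only the resulting call order is visible in the trace, so the sketch below is a reconstruction: the shuffled ordering and backgrounded RPCs are assumptions made to explain why the namespace numbers land out of sequence, and rpc_py stands in for the full scripts/rpc.py path.

    # Sketch: churn namespaces 1..8 (backed by null bdevs null0..null7) on
    # cnode1 while I/O runs; the bound of 10 matches the traced (( i < 10 )).
    i=0
    while (( i < 10 )); do                                      # sh@16
        for n in $(seq 1 8 | shuf); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" \
                nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &   # sh@17
        done
        wait
        for n in $(seq 1 8 | shuf); do
            "$rpc_py" nvmf_subsystem_remove_ns \
                nqn.2016-06.io.spdk:cnode1 "$n" &               # sh@18
        done
        wait
        (( ++i ))
    done
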
00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.160 rmmod nvme_tcp 00:14:11.160 rmmod nvme_fabrics 00:14:11.160 rmmod nvme_keyring 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 188980 ']' 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 188980 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 188980 ']' 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 188980 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.160 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188980 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188980' 00:14:11.419 killing process with pid 188980 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 188980 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 188980 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.419 11:00:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.959 11:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.959 00:14:13.959 real 0m46.405s 00:14:13.959 user 3m32.205s 00:14:13.959 sys 0m15.936s 00:14:13.959 11:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.959 11:00:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 ************************************ 00:14:13.959 END TEST nvmf_ns_hotplug_stress 00:14:13.959 ************************************ 00:14:13.959 11:00:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:13.959 11:00:27 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:13.959 11:00:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.959 11:00:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.959 11:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 ************************************ 00:14:13.959 START TEST nvmf_connect_stress 00:14:13.959 ************************************ 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:13.959 * Looking for test storage... 
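
Rewinding one step: between the END TEST banner for nvmf_ns_hotplug_stress and the connect_stress prologue, nvmftestfini tears the fixture down. The sketch below is condensed from the traced nvmf/common.sh calls; the ip netns delete line is an assumption about what _remove_spdk_ns does, since the trace only shows it being evaluated with its output discarded.

    sync
    set +e
    for i in {1..20}; do                 # retry until the kernel modules unload
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop nvmf_tgt (pid 188980 here)
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1             # drop the initiator-side address
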
00:14:13.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.959 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.960 11:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.860 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:15.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:15.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:15.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.861 11:00:30 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:15.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:15.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:14:15.861 00:14:15.861 --- 10.0.0.2 ping statistics --- 00:14:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.861 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:15.861 00:14:15.861 --- 10.0.0.1 ping statistics --- 00:14:15.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.861 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.861 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=196692 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 196692 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 196692 ']' 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.862 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.862 [2024-07-11 11:00:30.273390] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
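
The stretch above is nvmftestinit for the connect_stress run: the two E810 ports found earlier become the test endpoints, the target port (cvl_0_0) moves into a private network namespace while the initiator port (cvl_0_1) stays in the default one, and the target application is then started inside that namespace. The sketch is condensed from the traced commands; the comment on waitforlisten describes SPDK's usual RPC-socket polling and is an inference, as its body is not shown in the trace.

    # nvmf_tcp_init: two-namespace TCP topology
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path

    # nvmfappstart: launch the target inside the namespace
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &    # cores 1-3, all tracepoint groups
    nvmfpid=$!                                          # 196692 in this run
    waitforlisten "$nvmfpid"                            # poll /var/tmp/spdk.sock until it answers
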
00:14:15.862 [2024-07-11 11:00:30.273474] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.120 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.120 [2024-07-11 11:00:30.339923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.120 [2024-07-11 11:00:30.427837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.120 [2024-07-11 11:00:30.427894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.120 [2024-07-11 11:00:30.427923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.120 [2024-07-11 11:00:30.427935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.120 [2024-07-11 11:00:30.427945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.120 [2024-07-11 11:00:30.427998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.120 [2024-07-11 11:00:30.428060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.120 [2024-07-11 11:00:30.428063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.120 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.120 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:16.120 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.120 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.120 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 [2024-07-11 11:00:30.562184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 [2024-07-11 11:00:30.591894] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.379 NULL1 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=196835 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
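
With the listener notice above, the target side is up. The rpc_cmd calls traced in this stretch amount to the bring-up below, after which the connect_stress initiator (PID 196835 here) is launched for a ten-second run. Paths are abbreviated, and rpc.py is shown for readability although the script drives the same RPCs over /var/tmp/spdk.sock through the rpc_cmd helper; the flags themselves are copied from the trace.

    rpc_py=scripts/rpc.py
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192   # -u: 8 KiB in-capsule data
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                  # -a: allow any host; -m: up to 10 namespaces
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512           # 1000 MB null bdev, 512 B blocks

    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                                         # 196835 in this run
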
00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.379 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.645 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.645 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:16.645 11:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.645 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.645 11:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.903 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.903 11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:16.903 11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.903 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.903 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.468 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.468 11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:17.468 
11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.468 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.468 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.725 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.725 11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:17.725 11:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.725 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.725 11:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.984 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.984 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:17.984 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.984 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.984 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.242 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.242 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:18.242 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.242 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.242 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.500 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.500 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:18.500 11:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.500 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.500 11:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.070 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.070 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:19.070 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.070 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.070 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.330 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.330 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:19.330 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.330 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.330 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.589 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.589 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:19.589 11:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:19.589 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.589 11:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.849 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.849 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:19.849 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.849 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.849 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.110 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.110 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:20.110 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.110 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.110 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.703 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.703 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:20.703 11:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.703 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.703 11:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.961 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.961 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:20.961 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.961 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.961 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.220 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.220 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:21.220 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.220 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.220 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.480 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.480 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:21.480 11:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.480 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.480 11:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.737 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.737 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:21.737 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.737 11:00:36 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.737 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.303 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.303 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:22.303 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.303 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.303 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.561 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.561 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:22.561 11:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.561 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.561 11:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.819 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.819 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:22.819 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.819 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.819 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.077 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.077 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:23.077 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.077 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.077 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.336 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.336 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:23.336 11:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.336 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.336 11:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.902 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.902 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:23.902 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.902 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.902 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.161 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.162 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:24.162 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.162 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.162 
11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.422 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.422 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:24.422 11:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.422 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.422 11:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.682 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.682 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:24.682 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.682 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.682 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.940 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.940 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:24.940 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.940 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.940 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.507 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:25.507 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.507 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.507 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.767 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.767 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:25.767 11:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.767 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.767 11:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.027 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.027 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:26.027 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.027 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.027 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.286 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.286 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:26.286 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.286 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.286 11:00:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.286 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 196835 00:14:26.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (196835) - No such process 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 196835 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.544 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.544 rmmod nvme_tcp 00:14:26.544 rmmod nvme_fabrics 00:14:26.804 rmmod nvme_keyring 00:14:26.804 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.804 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:26.804 11:00:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 196692 ']' 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 196692 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 196692 ']' 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 196692 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 196692 00:14:26.804 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.805 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.805 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 196692' 00:14:26.805 killing process with pid 196692 00:14:26.805 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 196692 00:14:26.805 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 196692 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.065 11:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.968 11:00:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.968 00:14:28.968 real 0m15.398s 00:14:28.968 user 0m39.676s 00:14:28.968 sys 0m4.642s 00:14:28.968 11:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.968 11:00:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.968 ************************************ 00:14:28.968 END TEST nvmf_connect_stress 00:14:28.968 ************************************ 00:14:28.968 11:00:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:28.968 11:00:43 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:28.968 11:00:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.968 11:00:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.968 11:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.968 ************************************ 00:14:28.968 START TEST nvmf_fused_ordering 00:14:28.968 ************************************ 00:14:28.968 11:00:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:28.968 * Looking for test storage... 
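The burst of kill -0 196835 probes above is connect_stress waiting for its I/O generator to finish on its own: kill -0 delivers no signal and only tests whether the PID still exists, so the loop spins (issuing an rpc_cmd each pass to keep the target busy) until the shell reports (196835) - No such process, after which wait reaps the PID and the exit trap tears everything down. A minimal sketch of the same liveness-polling idiom, with a hypothetical pid variable standing in for the backgrounded stress workload (the real harness runs rpc_cmd instead of sleep inside the loop):

    pid=$!                            # hypothetical: PID of the backgrounded stress workload
    while kill -0 "$pid" 2>/dev/null; do
        sleep 1                       # still alive; keep the stress window open
    done
    wait "$pid" 2>/dev/null || true   # reap it; "No such process" noise is expected here
    rm -f rpc.txt                     # mirror the harness: drop the scratch RPC file
    trap - SIGINT SIGTERM EXIT        # cleanup done, disarm the exit trap

The same shutdown shape repeats at the end of every sub-test in this log: unload nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess on the long-running target PID.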
00:14:29.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.226 11:00:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.126 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.127 11:00:45 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.127 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
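nvmf_tcp_init, traced just above, splits the NIC's two ports across network namespaces so the initiator (cvl_0_1, 10.0.0.1) and the SPDK target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) talk over a real link even though both live on one host; the echo replies that follow confirm both directions. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back

Keeping the kernel initiator and the userspace target in separate namespaces forces the traffic onto the wire instead of the loopback path.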
00:14:31.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:14:31.387 00:14:31.387 --- 10.0.0.2 ping statistics --- 00:14:31.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.387 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:31.387 00:14:31.387 --- 10.0.0.1 ping statistics --- 00:14:31.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.387 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=199982 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 199982 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 199982 ']' 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.387 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.387 [2024-07-11 11:00:45.662984] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
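With the path verified, nvmfappstart -m 0x2 launches nvmf_tgt inside the target namespace pinned to core 1 (hence reactor_1 in the later killprocess output) and waitforlisten 199982 blocks until the app's RPC socket answers. A sketch of that launch-and-wait step; the polling loop is a simplified stand-in for the harness helper, and $rootdir is assumed to point at the SPDK tree:

    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers (what waitforlisten
    # does, minus its timeout bookkeeping).
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done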
00:14:31.387 [2024-07-11 11:00:45.663083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.387 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.387 [2024-07-11 11:00:45.727011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.647 [2024-07-11 11:00:45.815765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.647 [2024-07-11 11:00:45.815825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.647 [2024-07-11 11:00:45.815853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.647 [2024-07-11 11:00:45.815866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.647 [2024-07-11 11:00:45.815876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.647 [2024-07-11 11:00:45.815904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 [2024-07-11 11:00:45.957161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 [2024-07-11 11:00:45.973314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.647 11:00:45 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 NULL1 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:31.647 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.648 11:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 11:00:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.648 11:00:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:31.648 [2024-07-11 11:00:46.015436] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:14:31.648 [2024-07-11 11:00:46.015470] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200002 ] 00:14:31.648 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.215 Attached to nqn.2016-06.io.spdk:cnode1 00:14:32.215 Namespace ID: 1 size: 1GB 00:14:32.215 fused_ordering(0) 00:14:32.215 fused_ordering(1) 00:14:32.215 fused_ordering(2) 00:14:32.215 fused_ordering(3) 00:14:32.215 fused_ordering(4) 00:14:32.215 fused_ordering(5) 00:14:32.215 fused_ordering(6) 00:14:32.215 fused_ordering(7) 00:14:32.215 fused_ordering(8) 00:14:32.215 fused_ordering(9) 00:14:32.215 fused_ordering(10) 00:14:32.215 fused_ordering(11) 00:14:32.215 fused_ordering(12) 00:14:32.215 fused_ordering(13) 00:14:32.215 fused_ordering(14) 00:14:32.215 fused_ordering(15) 00:14:32.215 fused_ordering(16) 00:14:32.215 fused_ordering(17) 00:14:32.215 fused_ordering(18) 00:14:32.215 fused_ordering(19) 00:14:32.215 fused_ordering(20) 00:14:32.215 fused_ordering(21) 00:14:32.215 fused_ordering(22) 00:14:32.215 fused_ordering(23) 00:14:32.215 fused_ordering(24) 00:14:32.215 fused_ordering(25) 00:14:32.215 fused_ordering(26) 00:14:32.215 fused_ordering(27) 00:14:32.215 fused_ordering(28) 00:14:32.215 fused_ordering(29) 00:14:32.215 fused_ordering(30) 00:14:32.215 fused_ordering(31) 00:14:32.215 fused_ordering(32) 00:14:32.215 fused_ordering(33) 00:14:32.215 fused_ordering(34) 00:14:32.215 fused_ordering(35) 00:14:32.215 fused_ordering(36) 00:14:32.215 fused_ordering(37) 00:14:32.215 fused_ordering(38) 00:14:32.215 fused_ordering(39) 00:14:32.215 fused_ordering(40) 00:14:32.215 fused_ordering(41) 00:14:32.215 fused_ordering(42) 00:14:32.215 fused_ordering(43) 00:14:32.215 
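The fused_ordering(n) lines streaming through here are the test app ticking off its operations, 0 through 1023, against the 1 GB null namespace it reports attaching to above. The target it drives was assembled by the rpc_cmd calls just traced; replayed by hand they amount to the following, where rpc.py is shorthand for $rootdir/scripts/rpc.py -s /var/tmp/spdk.sock and the flags are copied verbatim from this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options exactly as the harness passed them
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks: the 1GB namespace
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    "$rootdir/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

A null bdev discards writes and returns zeroes on reads, which keeps the focus on command ordering at the transport rather than on storage latency.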
fused_ordering(44) 00:14:32.215 fused_ordering(45) 00:14:32.215 ... fused_ordering(1011) 00:14:33.873 fused_ordering(1012) 
00:14:33.873 fused_ordering(1013) 00:14:33.873 fused_ordering(1014) 00:14:33.873 fused_ordering(1015) 00:14:33.873 fused_ordering(1016) 00:14:33.873 fused_ordering(1017) 00:14:33.873 fused_ordering(1018) 00:14:33.873 fused_ordering(1019) 00:14:33.873 fused_ordering(1020) 00:14:33.873 fused_ordering(1021) 00:14:33.873 fused_ordering(1022) 00:14:33.873 fused_ordering(1023) 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.873 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.873 rmmod nvme_tcp 00:14:33.873 rmmod nvme_fabrics 00:14:33.873 rmmod nvme_keyring 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 199982 ']' 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 199982 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 199982 ']' 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 199982 00:14:34.132 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 199982 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 199982' 00:14:34.133 killing process with pid 199982 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 199982 00:14:34.133 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 199982 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:14:34.406 11:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.320 11:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.320 00:14:36.320 real 0m7.265s 00:14:36.320 user 0m5.063s 00:14:36.320 sys 0m2.759s 00:14:36.320 11:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.320 11:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.320 ************************************ 00:14:36.320 END TEST nvmf_fused_ordering 00:14:36.320 ************************************ 00:14:36.320 11:00:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:36.320 11:00:50 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:36.320 11:00:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:36.320 11:00:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.320 11:00:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.320 ************************************ 00:14:36.320 START TEST nvmf_delete_subsystem 00:14:36.320 ************************************ 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:36.321 * Looking for test storage... 00:14:36.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [paths/export.sh@2-@6 trace condensed: the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin entries are repeatedly re-prepended to PATH, then exported and echoed] 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- #
'[' 0 -eq 1 ']' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:36.321 11:00:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:38.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.862 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:38.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:38.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:38.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:14:38.863 00:14:38.863 --- 10.0.0.2 ping statistics --- 00:14:38.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.863 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:14:38.863 00:14:38.863 --- 10.0.0.1 ping statistics --- 00:14:38.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.863 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=202195 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 202195 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 202195 ']' 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.863 11:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.863 [2024-07-11 11:00:52.870546] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:14:38.863 [2024-07-11 11:00:52.870626] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.863 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.863 [2024-07-11 11:00:52.940723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:38.863 [2024-07-11 11:00:53.029231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:38.863 [2024-07-11 11:00:53.029307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.863 [2024-07-11 11:00:53.029320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.863 [2024-07-11 11:00:53.029332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.863 [2024-07-11 11:00:53.029342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.863 [2024-07-11 11:00:53.029424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.863 [2024-07-11 11:00:53.029429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.863 [2024-07-11 11:00:53.167633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.863 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.864 [2024-07-11 11:00:53.183810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.864 NULL1 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.864 Delay0 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=202287 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:38.864 11:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:38.864 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.864 [2024-07-11 11:00:53.258535] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
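For anyone replaying this setup by hand: the rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py. A minimal sketch of the same sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket (the subcommands and arguments are copied from the traced invocations; only the rpc.py wrapper and socket are assumed):

    # TCP transport; -u 8192 sets the in-capsule data size, -o kept as traced
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # subsystem allowing any host (-a), with a serial number and a 10-namespace cap (-m 10)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev
    # (average and p99 read/write latencies of 1000000 us, i.e. 1 s)
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The 1-second Delay0 latencies are presumably what keep perf I/O in flight long enough for the nvmf_delete_subsystem call below to race against it.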
00:14:41.398 11:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 11:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 11:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repetitive I/O completion trace at 00:14:41.398-41.973 condensed: a long run of 'Read/Write completed with error (sct=0, sc=8)' lines interleaved with 'starting I/O failed: -6' markers, as in-flight perf I/O is aborted while the subsystem is deleted; the distinct qpair state transitions from that run are kept below]
[2024-07-11 11:00:55.339204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7840000c00 is same with the state(5) to be set
[2024-07-11 11:00:55.339834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d140 is same with the state(5) to be set
[2024-07-11 11:00:55.340121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f784000d600 is same with the state(5) to be set
[2024-07-11 11:00:56.313250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aaa30 is same with the state(5) to be set
[2024-07-11 11:00:56.341173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c970 is same with the state(5) to be set
[2024-07-11 11:00:56.341420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ce30 is same with the state(5) to be set
[2024-07-11 11:00:56.341664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d450 is same with the state(5) to be set
[2024-07-11 11:00:56.342427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f784000d2f0 is same with the state(5) to be set
00:14:41.973 Initializing NVMe Controllers
00:14:41.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:41.973 Controller IO queue size 128, less than required.
00:14:41.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:41.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:41.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:41.973 Initialization complete. Launching workers.
00:14:41.973 ========================================================
00:14:41.973 Latency(us)
00:14:41.973 Device Information                                                       :  IOPS    MiB/s  Average     min      max
00:14:41.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 : 179.21   0.09  1035297.99  1940.82  2002283.47
00:14:41.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 : 156.87   0.08  875953.70   503.55   2002434.17
00:14:41.973 ========================================================
00:14:41.973 Total                                                                    : 336.09   0.16  960921.63   503.55   2002434.17
00:14:41.973
00:14:41.973 [2024-07-11 11:00:56.342890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aaa30 (9): Bad file descriptor
00:14:41.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:41.973 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.973 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:41.974 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 202287 00:14:41.974 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
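The @34-@36 lines above, together with the @38 check that follows, are the harness polling for the perf process to exit after the subsystem was deleted out from under it. A rough reconstruction of that loop (a sketch inferred from the traced line tags, not a verbatim copy of delete_subsystem.sh; the second run below uses a shorter > 20 bound):

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do  # perf process still alive?
        (( delay++ > 30 )) && exit 1            # give up after ~15 s of 0.5 s naps
        sleep 0.5
    done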
00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.543 [2024-07-11 11:00:56.867783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=202742 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:42.543 11:00:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.543 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.543 [2024-07-11 11:00:56.921429] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
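Condensed from the commands traced above, the second pass rebuilds the subsystem, restarts I/O, and then polls for the perf process to exit once the subsystem is deleted again; rpc.py is spelled out here in place of the rpc_cmd wrapper, and the loop body is a simplified reading of the @57/@58/@60 trace lines:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Recreate the subsystem, listener, and Delay0 namespace ...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # ... restart perf in the background with the flags from the trace ...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # ... and give it up to 20 half-second polls to disappear after the delete
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 20 )) && exit 1
        sleep 0.5
    done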
00:14:43.110 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.110 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:43.110 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.679 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.679 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:43.679 11:00:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.252 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.252 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:44.252 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.511 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.511 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:44.511 11:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.079 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.079 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:45.079 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.648 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.648 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742 00:14:45.648 11:00:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.906 Initializing NVMe Controllers 00:14:45.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.906 Controller IO queue size 128, less than required. 00:14:45.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:45.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:45.906 Initialization complete. Launching workers. 
00:14:45.906 ========================================================
00:14:45.906                                                                           Latency(us)
00:14:45.906 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:14:45.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1004331.10 1000203.35 1012717.78
00:14:45.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004099.65 1000218.21 1012471.02
00:14:45.907 ========================================================
00:14:45.907 Total                                                                    :  256.00    0.12 1004215.37 1000203.35 1012717.78
00:14:45.907
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 202742
00:14:46.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (202742) - No such process
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 202742
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:46.167 rmmod nvme_tcp
00:14:46.167 rmmod nvme_fabrics
00:14:46.167 rmmod nvme_keyring
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 202195 ']'
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 202195
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 202195 ']'
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 202195
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202195
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202195'
00:14:46.167 killing process with pid 202195
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 202195
00:14:46.167 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 202195
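killprocess, which just tore down the nvmf target (pid 202195), follows the pattern visible in the trace; a simplified sketch (the real helper in autotest_common.sh carries more error handling):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            # refuse to signal a sudo wrapper; the target shows up as reactor_0
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so the next test starts clean
    }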
00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.427 11:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.337 11:01:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.337 00:14:48.337 real 0m12.089s 00:14:48.337 user 0m27.458s 00:14:48.337 sys 0m2.965s 00:14:48.337 11:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.337 11:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.337 ************************************ 00:14:48.337 END TEST nvmf_delete_subsystem 00:14:48.337 ************************************ 00:14:48.596 11:01:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:48.596 11:01:02 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:48.596 11:01:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.596 11:01:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.596 11:01:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 ************************************ 00:14:48.596 START TEST nvmf_ns_masking 00:14:48.596 ************************************ 00:14:48.596 11:01:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:48.597 * Looking for test storage... 
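run_test, which frames every test here with the START/END banners above, is roughly the following (a sketch; the real harness also records timings and manages xtrace around the call):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        "$@"    # e.g. test/nvmf/target/ns_masking.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        return $rc
    }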
00:14:48.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=546ad0a7-2b8d-4f0c-980b-a572bef8e244 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4469393f-d8cf-441f-9e8c-0e79150f40b8 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=23dde246-d8ed-4d8f-af9f-da8712cd7033 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.597 11:01:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.135 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:51.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:51.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.136 
11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:51.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:51.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:51.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:51.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms
00:14:51.136
00:14:51.136 --- 10.0.0.2 ping statistics ---
00:14:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:51.136 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:51.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:51.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:14:51.136
00:14:51.136 --- 10.0.0.1 ping statistics ---
00:14:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:51.136 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=205088
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 205088
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 205088 ']'
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:51.136 11:01:05
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.136 [2024-07-11 11:01:05.260024] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:14:51.136 [2024-07-11 11:01:05.260113] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.136 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.136 [2024-07-11 11:01:05.322998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.136 [2024-07-11 11:01:05.406630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.136 [2024-07-11 11:01:05.406682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.136 [2024-07-11 11:01:05.406705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.136 [2024-07-11 11:01:05.406716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.136 [2024-07-11 11:01:05.406725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.136 [2024-07-11 11:01:05.406789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.136 11:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.396 [2024-07-11 11:01:05.767744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.396 11:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:51.396 11:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:51.396 11:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.655 Malloc1 00:14:51.655 11:01:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.243 Malloc2 00:14:52.243 11:01:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
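The nvmf_tgt that just came up was started inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init built a few entries back: the first ice port moves into that namespace as the target side of the link (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and both directions are ping-verified. Consolidated from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator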
00:14:52.243 11:01:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:52.509 11:01:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.777 [2024-07-11 11:01:07.106144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.777 11:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:52.777 11:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 23dde246-d8ed-4d8f-af9f-da8712cd7033 -a 10.0.0.2 -s 4420 -i 4 00:14:53.041 11:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.041 11:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.041 11:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.041 11:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.041 11:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.010 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.011 [ 0]:0x1 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5aab1487ad4743c184b7355d4fbee995 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5aab1487ad4743c184b7355d4fbee995 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.011 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
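The target configuration for the masking test, consolidated from the rpc.py calls traced above (namespace 1 is attached auto-visible at this stage; the --no-auto-visible variant that enables per-host masking comes later, and namespace 2 is attached while the host is already connected):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2    # added live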
00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.315 [ 0]:0x1 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5aab1487ad4743c184b7355d4fbee995 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5aab1487ad4743c184b7355d4fbee995 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.315 [ 1]:0x2 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:55.315 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.577 11:01:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.846 11:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 23dde246-d8ed-4d8f-af9f-da8712cd7033 -a 10.0.0.2 -s 4420 -i 4 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:56.130 11:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.758 11:01:12 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:58.758 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.759 [ 0]:0x2 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.759 11:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.759 [ 0]:0x1 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5aab1487ad4743c184b7355d4fbee995 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5aab1487ad4743c184b7355d4fbee995 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.759 [ 1]:0x2 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.759 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.037 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:59.310 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.311 [ 0]:0x2 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.311 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 23dde246-d8ed-4d8f-af9f-da8712cd7033 -a 10.0.0.2 -s 4420 -i 4 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:59.577 11:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.557 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
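waitforserial, traced twice now, is the readiness poll that gates the test on the host actually seeing its namespaces; a simplified reconstruction from the xtrace:

    # Retry until lsblk shows the expected number of block devices whose
    # serial matches the subsystem's.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME 2    # both namespaces visible this time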
00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:01.841 11:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.841 [ 0]:0x1 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.841 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5aab1487ad4743c184b7355d4fbee995 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5aab1487ad4743c184b7355d4fbee995 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.842 [ 1]:0x2 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.842 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.128 [ 0]:0x2 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.128 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.405 [2024-07-11 11:01:16.739164] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:02.405 request: 00:15:02.405 { 00:15:02.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.405 "nsid": 2, 00:15:02.405 "host": "nqn.2016-06.io.spdk:host1", 00:15:02.405 "method": "nvmf_ns_remove_host", 00:15:02.405 "req_id": 1 00:15:02.405 } 00:15:02.405 Got JSON-RPC error response 00:15:02.405 response: 00:15:02.405 { 00:15:02.405 "code": -32602, 00:15:02.405 "message": "Invalid parameters" 00:15:02.405 } 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.405 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.406 [ 0]:0x2 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.406 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fbaedae5d7694021a494388a5051f8df 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fbaedae5d7694021a494388a5051f8df != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=206627 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 206627 /var/tmp/host.sock 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 206627 ']' 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:02.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.678 11:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.678 [2024-07-11 11:01:16.916082] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
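Note: the visibility probe that the trace above keeps repeating comes from target/ns_masking.sh (script lines 43-45 in the trace). Reconstructed as a standalone helper, it looks roughly like the sketch below; the function name, the fixed /dev/nvme0 device, and the bare grep are taken from the trace, so treat this as an illustration of the technique rather than the exact source:

# ns_is_visible <nsid>: list the controller's namespaces, grep for the
# NSID, then read the NGUID via id-ns. A masked namespace reports an
# all-zero NGUID, so the final test fails and the caller (wrapped in
# NOT in the trace) treats that as "hidden from this host".
ns_is_visible() {
    nvme list-ns /dev/nvme0 | grep "$1"
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x2   # passes above: nguid=fbaedae5d7694021a494388a5051f8df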
00:15:02.678 [2024-07-11 11:01:16.916161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206627 ] 00:15:02.678 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.678 [2024-07-11 11:01:16.974927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.678 [2024-07-11 11:01:17.061939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.946 11:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.946 11:01:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:02.946 11:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.223 11:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.496 11:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 546ad0a7-2b8d-4f0c-980b-a572bef8e244 00:15:03.496 11:01:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:03.496 11:01:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 546AD0A72B8D4F0C980BA572BEF8E244 -i 00:15:03.765 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4469393f-d8cf-441f-9e8c-0e79150f40b8 00:15:03.765 11:01:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:03.765 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4469393FD8CF441F9E8C0E79150F40B8 -i 00:15:04.045 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.330 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:04.607 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.607 11:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.878 nvme0n1 00:15:04.878 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:04.878 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:05.479 nvme1n2 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:05.479 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:05.480 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:05.480 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:05.480 11:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:05.754 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 546ad0a7-2b8d-4f0c-980b-a572bef8e244 == \5\4\6\a\d\0\a\7\-\2\b\8\d\-\4\f\0\c\-\9\8\0\b\-\a\5\7\2\b\e\f\8\e\2\4\4 ]] 00:15:05.754 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:05.754 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:05.754 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4469393f-d8cf-441f-9e8c-0e79150f40b8 == \4\4\6\9\3\9\3\f\-\d\8\c\f\-\4\4\1\f\-\9\e\8\c\-\0\e\7\9\1\5\0\f\4\0\b\8 ]] 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 206627 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 206627 ']' 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 206627 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 206627 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 206627' 00:15:06.025 killing process with pid 206627 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 206627 00:15:06.025 11:01:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 206627 00:15:06.599 11:01:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:06.857 11:01:21 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.857 rmmod nvme_tcp 00:15:06.857 rmmod nvme_fabrics 00:15:06.857 rmmod nvme_keyring 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 205088 ']' 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 205088 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 205088 ']' 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 205088 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 205088 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 205088' 00:15:06.857 killing process with pid 205088 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 205088 00:15:06.857 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 205088 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.116 11:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.653 11:01:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.653 00:15:09.653 real 0m20.660s 00:15:09.653 user 0m26.588s 00:15:09.653 sys 0m4.172s 00:15:09.653 11:01:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.653 11:01:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:09.653 ************************************ 00:15:09.653 END TEST nvmf_ns_masking 00:15:09.653 ************************************ 00:15:09.653 11:01:23 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:09.653 11:01:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:09.653 11:01:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.653 11:01:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.653 11:01:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.653 11:01:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.653 ************************************ 00:15:09.653 START TEST nvmf_nvme_cli 00:15:09.653 ************************************ 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.653 * Looking for test storage... 00:15:09.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.653 11:01:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.558 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:11.558 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:11.559 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:11.559 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:11.559 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.559 11:01:25 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:15:11.559 00:15:11.559 --- 10.0.0.2 ping statistics --- 00:15:11.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.559 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:15:11.559 00:15:11.559 --- 10.0.0.1 ping statistics --- 00:15:11.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.559 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=209138 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 209138 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 209138 ']' 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.559 11:01:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.559 [2024-07-11 11:01:25.883524] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
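Note: the nvmf_tcp_init sequence traced above is what makes a physical two-port NIC behave like a host/target pair. Condensed into one runnable sketch (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this rig enumerated; substitute your own interfaces and run as root):

# Move the target port into its own network namespace; keep the
# initiator port in the root namespace. Traffic then crosses the wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator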
00:15:11.559 [2024-07-11 11:01:25.883600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.559 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.559 [2024-07-11 11:01:25.945814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.817 [2024-07-11 11:01:26.028498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.817 [2024-07-11 11:01:26.028553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.817 [2024-07-11 11:01:26.028582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.817 [2024-07-11 11:01:26.028592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.817 [2024-07-11 11:01:26.028602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.817 [2024-07-11 11:01:26.028683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.817 [2024-07-11 11:01:26.028748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.817 [2024-07-11 11:01:26.028817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.817 [2024-07-11 11:01:26.028820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.817 [2024-07-11 11:01:26.182647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.817 Malloc0 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.817 Malloc1 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.817 11:01:26 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.817 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 [2024-07-11 11:01:26.265275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:12.076 00:15:12.076 Discovery Log Number of Records 2, Generation counter 2 00:15:12.076 =====Discovery Log Entry 0====== 00:15:12.076 trtype: tcp 00:15:12.076 adrfam: ipv4 00:15:12.076 subtype: current discovery subsystem 00:15:12.076 treq: not required 00:15:12.076 portid: 0 00:15:12.076 trsvcid: 4420 00:15:12.076 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:12.076 traddr: 10.0.0.2 00:15:12.076 eflags: explicit discovery connections, duplicate discovery information 00:15:12.076 sectype: none 00:15:12.076 =====Discovery Log Entry 1====== 00:15:12.076 trtype: tcp 00:15:12.076 adrfam: ipv4 00:15:12.076 subtype: nvme subsystem 00:15:12.076 treq: not required 00:15:12.076 portid: 0 00:15:12.076 trsvcid: 4420 00:15:12.076 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:12.076 traddr: 10.0.0.2 00:15:12.076 eflags: none 00:15:12.076 sectype: none 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:12.076 11:01:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:12.645 11:01:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.180 11:01:29 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:15.180 /dev/nvme0n1 ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:15.180 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.441 rmmod nvme_tcp 00:15:15.441 rmmod nvme_fabrics 00:15:15.441 rmmod nvme_keyring 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 209138 ']' 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 209138 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 209138 ']' 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 209138 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 209138 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 209138' 00:15:15.441 killing process with pid 209138 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 209138 00:15:15.441 11:01:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 209138 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.700 11:01:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.239 11:01:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.239 00:15:18.239 real 0m8.551s 00:15:18.239 user 0m16.317s 00:15:18.239 sys 0m2.284s 00:15:18.239 11:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.239 11:01:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:18.239 ************************************ 00:15:18.239 END TEST nvmf_nvme_cli 00:15:18.239 ************************************ 00:15:18.239 11:01:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:18.239 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:18.239 11:01:32 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:18.239 11:01:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.239 11:01:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.239 11:01:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.239 ************************************ 00:15:18.239 START TEST nvmf_vfio_user 00:15:18.239 ************************************ 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.239 * Looking for test storage... 00:15:18.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.239 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:18.240 
11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=210056 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 210056' 00:15:18.240 Process pid: 210056 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 210056 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 210056 ']' 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:18.240 [2024-07-11 11:01:32.237447] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:15:18.240 [2024-07-11 11:01:32.237540] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.240 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.240 [2024-07-11 11:01:32.296355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.240 [2024-07-11 11:01:32.381241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.240 [2024-07-11 11:01:32.381309] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.240 [2024-07-11 11:01:32.381323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.240 [2024-07-11 11:01:32.381334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.240 [2024-07-11 11:01:32.381358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
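
The trace above launches nvmf_tgt on cores 0-3 and blocks in waitforlisten until the RPC socket answers; the trace below then provisions two VFIO-user controllers. A condensed, hand-runnable sketch of that sequence follows (the workspace root is this job's path as logged; the readiness loop using rpc_get_methods is an assumption standing in for the test's waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# Start the target with the same shm id, tracepoint mask, and core mask as above
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!

# Assumption: poll any cheap RPC until the target listens on /var/tmp/spdk.sock
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# One VFIO-user transport, then per device: socket dir, malloc bdev,
# subsystem, namespace, and a VFIOUSER listener rooted at that dir
$RPC nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
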
00:15:18.240 [2024-07-11 11:01:32.381455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.240 [2024-07-11 11:01:32.381517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.240 [2024-07-11 11:01:32.381584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.240 [2024-07-11 11:01:32.381586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:18.240 11:01:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:19.176 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:19.432 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:19.432 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:19.432 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.432 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:19.432 11:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.688 Malloc1 00:15:19.688 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.958 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:20.524 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:20.524 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.524 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:20.524 11:01:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:20.782 Malloc2 00:15:20.782 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.040 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:21.298 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.557 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.557 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.557 11:01:35 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.557 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.557 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.557 11:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.557 [2024-07-11 11:01:35.965627] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:15:21.557 [2024-07-11 11:01:35.965664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210485 ] 00:15:21.557 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.817 [2024-07-11 11:01:35.999123] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.817 [2024-07-11 11:01:36.007248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.817 [2024-07-11 11:01:36.007275] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe58e213000 00:15:21.817 [2024-07-11 11:01:36.008241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.009233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.010239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.011244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.012247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.013256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.014263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.015267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.817 [2024-07-11 11:01:36.016274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.817 [2024-07-11 11:01:36.016295] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe58cfc7000 00:15:21.817 [2024-07-11 11:01:36.017415] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.817 [2024-07-11 11:01:36.033449] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.817 [2024-07-11 11:01:36.033492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:21.817 [2024-07-11 11:01:36.038432] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.817 [2024-07-11 11:01:36.038486] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.817 [2024-07-11 11:01:36.038582] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:21.817 [2024-07-11 11:01:36.038612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:21.817 [2024-07-11 11:01:36.038622] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:21.817 [2024-07-11 11:01:36.039418] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.817 [2024-07-11 11:01:36.039439] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:21.817 [2024-07-11 11:01:36.039451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:21.817 [2024-07-11 11:01:36.040421] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.817 [2024-07-11 11:01:36.040440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:21.817 [2024-07-11 11:01:36.040453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.041423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.817 [2024-07-11 11:01:36.041440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.042432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:21.817 [2024-07-11 11:01:36.042451] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.817 [2024-07-11 11:01:36.042460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.042471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.042580] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:21.817 [2024-07-11 11:01:36.042587] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.042596] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.817 [2024-07-11 11:01:36.043442] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.817 [2024-07-11 11:01:36.044447] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.817 [2024-07-11 11:01:36.045452] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.817 [2024-07-11 11:01:36.046448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.817 [2024-07-11 11:01:36.046554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.817 [2024-07-11 11:01:36.047461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.817 [2024-07-11 11:01:36.047479] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.817 [2024-07-11 11:01:36.047491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.817 [2024-07-11 11:01:36.047515] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:21.817 [2024-07-11 11:01:36.047528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.817 [2024-07-11 11:01:36.047553] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.817 [2024-07-11 11:01:36.047562] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.817 [2024-07-11 11:01:36.047582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.817 [2024-07-11 11:01:36.047639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.817 [2024-07-11 11:01:36.047656] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:21.817 [2024-07-11 11:01:36.047667] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:21.818 [2024-07-11 11:01:36.047675] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:21.818 [2024-07-11 11:01:36.047682] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.818 [2024-07-11 11:01:36.047689] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:21.818 [2024-07-11 11:01:36.047697] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:21.818 [2024-07-11 11:01:36.047705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.047775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.047809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.818 [2024-07-11 11:01:36.047824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.818 [2024-07-11 11:01:36.047837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.818 [2024-07-11 11:01:36.047849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.818 [2024-07-11 11:01:36.047858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.047917] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:21.818 [2024-07-11 11:01:36.047926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.047961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.047977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048042] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048085] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.818 [2024-07-11 11:01:36.048093] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.818 [2024-07-11 11:01:36.048102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048136] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:21.818 [2024-07-11 11:01:36.048156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048181] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.818 [2024-07-11 11:01:36.048189] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.818 [2024-07-11 11:01:36.048198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048271] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.818 [2024-07-11 11:01:36.048278] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.818 [2024-07-11 11:01:36.048287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
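
The trace above is the standard NVMe controller bring-up, carried over vfio-user instead of PCIe: the host reads CAP (offset 0x0) and VS (0x8), sees CC.EN = 0 and CSTS.RDY = 0, programs the admin queue (ASQ at 0x28, ACQ at 0x30, AQA at 0x24), sets CC.EN = 1 at 0x14, polls CSTS (0x1c) until RDY = 1, then walks the Identify and Set Features admin commands. The same trace can be reproduced by hand with the identify example and the exact debug flags this test passes (a sketch; $SPDK as defined in the earlier snippet):

$SPDK/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci

Here -L enables the named debug log components, and -g maps to the --single-file-segments EAL option visible in the DPDK parameter line above.
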
00:15:21.818 [2024-07-11 11:01:36.048341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048375] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.818 [2024-07-11 11:01:36.048382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:21.818 [2024-07-11 11:01:36.048390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:21.818 [2024-07-11 11:01:36.048415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048538] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.818 [2024-07-11 11:01:36.048547] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.818 [2024-07-11 11:01:36.048553] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.818 [2024-07-11 11:01:36.048559] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.818 [2024-07-11 11:01:36.048567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.818 [2024-07-11 11:01:36.048578] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.818 
[2024-07-11 11:01:36.048585] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.818 [2024-07-11 11:01:36.048593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048603] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.818 [2024-07-11 11:01:36.048611] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.818 [2024-07-11 11:01:36.048622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048634] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.818 [2024-07-11 11:01:36.048641] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.818 [2024-07-11 11:01:36.048649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.818 [2024-07-11 11:01:36.048660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.818 [2024-07-11 11:01:36.048705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.818 ===================================================== 00:15:21.818 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.818 ===================================================== 00:15:21.818 Controller Capabilities/Features 00:15:21.818 ================================ 00:15:21.819 Vendor ID: 4e58 00:15:21.819 Subsystem Vendor ID: 4e58 00:15:21.819 Serial Number: SPDK1 00:15:21.819 Model Number: SPDK bdev Controller 00:15:21.819 Firmware Version: 24.09 00:15:21.819 Recommended Arb Burst: 6 00:15:21.819 IEEE OUI Identifier: 8d 6b 50 00:15:21.819 Multi-path I/O 00:15:21.819 May have multiple subsystem ports: Yes 00:15:21.819 May have multiple controllers: Yes 00:15:21.819 Associated with SR-IOV VF: No 00:15:21.819 Max Data Transfer Size: 131072 00:15:21.819 Max Number of Namespaces: 32 00:15:21.819 Max Number of I/O Queues: 127 00:15:21.819 NVMe Specification Version (VS): 1.3 00:15:21.819 NVMe Specification Version (Identify): 1.3 00:15:21.819 Maximum Queue Entries: 256 00:15:21.819 Contiguous Queues Required: Yes 00:15:21.819 Arbitration Mechanisms Supported 00:15:21.819 Weighted Round Robin: Not Supported 00:15:21.819 Vendor Specific: Not Supported 00:15:21.819 Reset Timeout: 15000 ms 00:15:21.819 Doorbell Stride: 4 bytes 00:15:21.819 NVM Subsystem Reset: Not Supported 00:15:21.819 Command Sets Supported 00:15:21.819 NVM Command Set: Supported 00:15:21.819 Boot Partition: Not Supported 00:15:21.819 Memory Page Size Minimum: 4096 bytes 00:15:21.819 Memory Page Size Maximum: 4096 bytes 00:15:21.819 Persistent Memory Region: Not Supported 
00:15:21.819 Optional Asynchronous Events Supported 00:15:21.819 Namespace Attribute Notices: Supported 00:15:21.819 Firmware Activation Notices: Not Supported 00:15:21.819 ANA Change Notices: Not Supported 00:15:21.819 PLE Aggregate Log Change Notices: Not Supported 00:15:21.819 LBA Status Info Alert Notices: Not Supported 00:15:21.819 EGE Aggregate Log Change Notices: Not Supported 00:15:21.819 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.819 Zone Descriptor Change Notices: Not Supported 00:15:21.819 Discovery Log Change Notices: Not Supported 00:15:21.819 Controller Attributes 00:15:21.819 128-bit Host Identifier: Supported 00:15:21.819 Non-Operational Permissive Mode: Not Supported 00:15:21.819 NVM Sets: Not Supported 00:15:21.819 Read Recovery Levels: Not Supported 00:15:21.819 Endurance Groups: Not Supported 00:15:21.819 Predictable Latency Mode: Not Supported 00:15:21.819 Traffic Based Keep ALive: Not Supported 00:15:21.819 Namespace Granularity: Not Supported 00:15:21.819 SQ Associations: Not Supported 00:15:21.819 UUID List: Not Supported 00:15:21.819 Multi-Domain Subsystem: Not Supported 00:15:21.819 Fixed Capacity Management: Not Supported 00:15:21.819 Variable Capacity Management: Not Supported 00:15:21.819 Delete Endurance Group: Not Supported 00:15:21.819 Delete NVM Set: Not Supported 00:15:21.819 Extended LBA Formats Supported: Not Supported 00:15:21.819 Flexible Data Placement Supported: Not Supported 00:15:21.819 00:15:21.819 Controller Memory Buffer Support 00:15:21.819 ================================ 00:15:21.819 Supported: No 00:15:21.819 00:15:21.819 Persistent Memory Region Support 00:15:21.819 ================================ 00:15:21.819 Supported: No 00:15:21.819 00:15:21.819 Admin Command Set Attributes 00:15:21.819 ============================ 00:15:21.819 Security Send/Receive: Not Supported 00:15:21.819 Format NVM: Not Supported 00:15:21.819 Firmware Activate/Download: Not Supported 00:15:21.819 Namespace Management: Not Supported 00:15:21.819 Device Self-Test: Not Supported 00:15:21.819 Directives: Not Supported 00:15:21.819 NVMe-MI: Not Supported 00:15:21.819 Virtualization Management: Not Supported 00:15:21.819 Doorbell Buffer Config: Not Supported 00:15:21.819 Get LBA Status Capability: Not Supported 00:15:21.819 Command & Feature Lockdown Capability: Not Supported 00:15:21.819 Abort Command Limit: 4 00:15:21.819 Async Event Request Limit: 4 00:15:21.819 Number of Firmware Slots: N/A 00:15:21.819 Firmware Slot 1 Read-Only: N/A 00:15:21.819 Firmware Activation Without Reset: N/A 00:15:21.819 Multiple Update Detection Support: N/A 00:15:21.819 Firmware Update Granularity: No Information Provided 00:15:21.819 Per-Namespace SMART Log: No 00:15:21.819 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.819 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.819 Command Effects Log Page: Supported 00:15:21.819 Get Log Page Extended Data: Supported 00:15:21.819 Telemetry Log Pages: Not Supported 00:15:21.819 Persistent Event Log Pages: Not Supported 00:15:21.819 Supported Log Pages Log Page: May Support 00:15:21.819 Commands Supported & Effects Log Page: Not Supported 00:15:21.819 Feature Identifiers & Effects Log Page:May Support 00:15:21.819 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.819 Data Area 4 for Telemetry Log: Not Supported 00:15:21.819 Error Log Page Entries Supported: 128 00:15:21.819 Keep Alive: Supported 00:15:21.819 Keep Alive Granularity: 10000 ms 00:15:21.819 00:15:21.819 NVM Command Set Attributes 
00:15:21.819 ========================== 00:15:21.819 Submission Queue Entry Size 00:15:21.819 Max: 64 00:15:21.819 Min: 64 00:15:21.819 Completion Queue Entry Size 00:15:21.819 Max: 16 00:15:21.819 Min: 16 00:15:21.819 Number of Namespaces: 32 00:15:21.819 Compare Command: Supported 00:15:21.819 Write Uncorrectable Command: Not Supported 00:15:21.819 Dataset Management Command: Supported 00:15:21.819 Write Zeroes Command: Supported 00:15:21.819 Set Features Save Field: Not Supported 00:15:21.819 Reservations: Not Supported 00:15:21.819 Timestamp: Not Supported 00:15:21.819 Copy: Supported 00:15:21.819 Volatile Write Cache: Present 00:15:21.819 Atomic Write Unit (Normal): 1 00:15:21.819 Atomic Write Unit (PFail): 1 00:15:21.819 Atomic Compare & Write Unit: 1 00:15:21.819 Fused Compare & Write: Supported 00:15:21.819 Scatter-Gather List 00:15:21.819 SGL Command Set: Supported (Dword aligned) 00:15:21.819 SGL Keyed: Not Supported 00:15:21.819 SGL Bit Bucket Descriptor: Not Supported 00:15:21.819 SGL Metadata Pointer: Not Supported 00:15:21.819 Oversized SGL: Not Supported 00:15:21.819 SGL Metadata Address: Not Supported 00:15:21.819 SGL Offset: Not Supported 00:15:21.819 Transport SGL Data Block: Not Supported 00:15:21.819 Replay Protected Memory Block: Not Supported 00:15:21.819 00:15:21.819 Firmware Slot Information 00:15:21.819 ========================= 00:15:21.819 Active slot: 1 00:15:21.819 Slot 1 Firmware Revision: 24.09 00:15:21.819 00:15:21.819 00:15:21.819 Commands Supported and Effects 00:15:21.819 ============================== 00:15:21.819 Admin Commands 00:15:21.819 -------------- 00:15:21.819 Get Log Page (02h): Supported 00:15:21.819 Identify (06h): Supported 00:15:21.819 Abort (08h): Supported 00:15:21.819 Set Features (09h): Supported 00:15:21.819 Get Features (0Ah): Supported 00:15:21.819 Asynchronous Event Request (0Ch): Supported 00:15:21.819 Keep Alive (18h): Supported 00:15:21.819 I/O Commands 00:15:21.819 ------------ 00:15:21.819 Flush (00h): Supported LBA-Change 00:15:21.819 Write (01h): Supported LBA-Change 00:15:21.819 Read (02h): Supported 00:15:21.819 Compare (05h): Supported 00:15:21.819 Write Zeroes (08h): Supported LBA-Change 00:15:21.819 Dataset Management (09h): Supported LBA-Change 00:15:21.819 Copy (19h): Supported LBA-Change 00:15:21.819 00:15:21.819 Error Log 00:15:21.819 ========= 00:15:21.819 00:15:21.819 Arbitration 00:15:21.819 =========== 00:15:21.819 Arbitration Burst: 1 00:15:21.819 00:15:21.819 Power Management 00:15:21.819 ================ 00:15:21.819 Number of Power States: 1 00:15:21.819 Current Power State: Power State #0 00:15:21.819 Power State #0: 00:15:21.819 Max Power: 0.00 W 00:15:21.819 Non-Operational State: Operational 00:15:21.819 Entry Latency: Not Reported 00:15:21.819 Exit Latency: Not Reported 00:15:21.819 Relative Read Throughput: 0 00:15:21.819 Relative Read Latency: 0 00:15:21.819 Relative Write Throughput: 0 00:15:21.819 Relative Write Latency: 0 00:15:21.819 Idle Power: Not Reported 00:15:21.819 Active Power: Not Reported 00:15:21.819 Non-Operational Permissive Mode: Not Supported 00:15:21.819 00:15:21.819 Health Information 00:15:21.819 ================== 00:15:21.819 Critical Warnings: 00:15:21.819 Available Spare Space: OK 00:15:21.819 Temperature: OK 00:15:21.819 Device Reliability: OK 00:15:21.819 Read Only: No 00:15:21.819 Volatile Memory Backup: OK 00:15:21.819 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:21.819 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.819 Available Spare: 0% 00:15:21.819 
[2024-07-11 11:01:36.048856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.820 [2024-07-11 11:01:36.048873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.820 [2024-07-11 11:01:36.048922] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:21.820 [2024-07-11 11:01:36.048941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.820 [2024-07-11 11:01:36.048952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.820 [2024-07-11 11:01:36.048962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.820 [2024-07-11 11:01:36.048972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.820 [2024-07-11 11:01:36.052765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.820 [2024-07-11 11:01:36.052788] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.820 [2024-07-11 11:01:36.053489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.820 [2024-07-11 11:01:36.053579] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:21.820 [2024-07-11 11:01:36.053592] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:21.820 [2024-07-11 11:01:36.054498] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.820 [2024-07-11 11:01:36.054522] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:21.820 [2024-07-11 11:01:36.054576] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.820 [2024-07-11 11:01:36.056537] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.820 Available Spare Threshold: 0% 00:15:21.820 Life Percentage Used: 0% 00:15:21.820 Data Units Read: 0 00:15:21.820 Data Units Written: 0 00:15:21.820 Host Read Commands: 0 00:15:21.820 Host Write Commands: 0 00:15:21.820 Controller Busy Time: 0 minutes 00:15:21.820 Power Cycles: 0 00:15:21.820 Power On Hours: 0 hours 00:15:21.820 Unsafe Shutdowns: 0 00:15:21.820 Unrecoverable Media Errors: 0 00:15:21.820 Lifetime Error Log Entries: 0 00:15:21.820 Warning Temperature Time: 0 minutes 00:15:21.820 Critical Temperature Time: 0 minutes 00:15:21.820 00:15:21.820 Number of Queues 00:15:21.820 ================ 00:15:21.820 Number of I/O Submission Queues: 127 00:15:21.820 Number of I/O Completion Queues: 127 00:15:21.820 00:15:21.820 Active Namespaces 00:15:21.820 ================= 00:15:21.820 Namespace ID:1 00:15:21.820 Error Recovery Timeout: Unlimited 00:15:21.820 Command
Set Identifier: NVM (00h) 00:15:21.820 Deallocate: Supported 00:15:21.820 Deallocated/Unwritten Error: Not Supported 00:15:21.820 Deallocated Read Value: Unknown 00:15:21.820 Deallocate in Write Zeroes: Not Supported 00:15:21.820 Deallocated Guard Field: 0xFFFF 00:15:21.820 Flush: Supported 00:15:21.820 Reservation: Supported 00:15:21.820 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.820 Size (in LBAs): 131072 (0GiB) 00:15:21.820 Capacity (in LBAs): 131072 (0GiB) 00:15:21.820 Utilization (in LBAs): 131072 (0GiB) 00:15:21.820 NGUID: 61AFA3CEDDE247BF96A8098800089327 00:15:21.820 UUID: 61afa3ce-dde2-47bf-96a8-098800089327 00:15:21.820 Thin Provisioning: Not Supported 00:15:21.820 Per-NS Atomic Units: Yes 00:15:21.820 Atomic Boundary Size (Normal): 0 00:15:21.820 Atomic Boundary Size (PFail): 0 00:15:21.820 Atomic Boundary Offset: 0 00:15:21.820 Maximum Single Source Range Length: 65535 00:15:21.820 Maximum Copy Length: 65535 00:15:21.820 Maximum Source Range Count: 1 00:15:21.820 NGUID/EUI64 Never Reused: No 00:15:21.820 Namespace Write Protected: No 00:15:21.820 Number of LBA Formats: 1 00:15:21.820 Current LBA Format: LBA Format #00 00:15:21.820 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.820 00:15:21.820 11:01:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.820 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.078 [2024-07-11 11:01:36.286580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.353 Initializing NVMe Controllers 00:15:27.353 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.353 Initialization complete. Launching workers. 00:15:27.353 ======================================================== 00:15:27.353 Latency(us) 00:15:27.353 Device Information : IOPS MiB/s Average min max 00:15:27.353 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34171.98 133.48 3745.74 1180.05 8166.56 00:15:27.353 ======================================================== 00:15:27.353 Total : 34171.98 133.48 3745.74 1180.05 8166.56 00:15:27.353 00:15:27.353 [2024-07-11 11:01:41.309836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.353 11:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.353 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.353 [2024-07-11 11:01:41.552997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.629 Initializing NVMe Controllers 00:15:32.629 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:32.629 Initialization complete. Launching workers. 
00:15:32.629 ======================================================== 00:15:32.629 Latency(us) 00:15:32.629 Device Information : IOPS MiB/s Average min max 00:15:32.629 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.18 62.70 7982.83 5932.58 10983.76 00:15:32.629 ======================================================== 00:15:32.629 Total : 16051.18 62.70 7982.83 5932.58 10983.76 00:15:32.629 00:15:32.629 [2024-07-11 11:01:46.591114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.629 11:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.629 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.630 [2024-07-11 11:01:46.805144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.906 [2024-07-11 11:01:51.888159] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.906 Initializing NVMe Controllers 00:15:37.906 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.906 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:37.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:37.906 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:37.906 Initialization complete. Launching workers. 00:15:37.906 Starting thread on core 2 00:15:37.906 Starting thread on core 3 00:15:37.906 Starting thread on core 1 00:15:37.906 11:01:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:37.906 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.906 [2024-07-11 11:01:52.178694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.198 [2024-07-11 11:01:55.240714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.198 Initializing NVMe Controllers 00:15:41.198 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.198 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.198 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:41.198 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:41.198 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:41.198 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:41.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.199 Initialization complete. Launching workers. 
00:15:41.199 Starting thread on core 1 with urgent priority queue 00:15:41.199 Starting thread on core 2 with urgent priority queue 00:15:41.199 Starting thread on core 3 with urgent priority queue 00:15:41.199 Starting thread on core 0 with urgent priority queue 00:15:41.199 SPDK bdev Controller (SPDK1 ) core 0: 5920.33 IO/s 16.89 secs/100000 ios 00:15:41.199 SPDK bdev Controller (SPDK1 ) core 1: 5557.33 IO/s 17.99 secs/100000 ios 00:15:41.199 SPDK bdev Controller (SPDK1 ) core 2: 6169.33 IO/s 16.21 secs/100000 ios 00:15:41.199 SPDK bdev Controller (SPDK1 ) core 3: 5827.33 IO/s 17.16 secs/100000 ios 00:15:41.199 ======================================================== 00:15:41.199 00:15:41.199 11:01:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.199 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.199 [2024-07-11 11:01:55.542272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.199 Initializing NVMe Controllers 00:15:41.199 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.199 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.199 Namespace ID: 1 size: 0GB 00:15:41.199 Initialization complete. 00:15:41.199 INFO: using host memory buffer for IO 00:15:41.199 Hello world! 00:15:41.199 [2024-07-11 11:01:55.575824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.459 11:01:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.459 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.459 [2024-07-11 11:01:55.859544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.839 Initializing NVMe Controllers 00:15:42.839 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.839 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.839 Initialization complete. Launching workers. 
00:15:42.839 submit (in ns) avg, min, max = 8061.2, 3517.8, 4016981.1 00:15:42.839 complete (in ns) avg, min, max = 25462.5, 2070.0, 4024736.7 00:15:42.839 00:15:42.839 Submit histogram 00:15:42.839 ================ 00:15:42.839 Range in us Cumulative Count 00:15:42.839 3.508 - 3.532: 0.0295% ( 4) 00:15:42.839 3.532 - 3.556: 0.2211% ( 26) 00:15:42.839 3.556 - 3.579: 1.1348% ( 124) 00:15:42.839 3.579 - 3.603: 3.3454% ( 300) 00:15:42.839 3.603 - 3.627: 9.1224% ( 784) 00:15:42.839 3.627 - 3.650: 16.1963% ( 960) 00:15:42.839 3.650 - 3.674: 26.2693% ( 1367) 00:15:42.839 3.674 - 3.698: 35.3032% ( 1226) 00:15:42.839 3.698 - 3.721: 44.9930% ( 1315) 00:15:42.839 3.721 - 3.745: 52.1406% ( 970) 00:15:42.839 3.745 - 3.769: 57.8071% ( 769) 00:15:42.839 3.769 - 3.793: 62.8472% ( 684) 00:15:42.839 3.793 - 3.816: 66.6789% ( 520) 00:15:42.839 3.816 - 3.840: 70.0833% ( 462) 00:15:42.839 3.840 - 3.864: 73.3107% ( 438) 00:15:42.839 3.864 - 3.887: 76.5972% ( 446) 00:15:42.839 3.887 - 3.911: 80.3994% ( 516) 00:15:42.839 3.911 - 3.935: 83.4574% ( 415) 00:15:42.839 3.935 - 3.959: 86.2648% ( 381) 00:15:42.839 3.959 - 3.982: 88.4165% ( 292) 00:15:42.839 3.982 - 4.006: 90.1776% ( 239) 00:15:42.839 4.006 - 4.030: 91.8282% ( 224) 00:15:42.839 4.030 - 4.053: 93.0293% ( 163) 00:15:42.839 4.053 - 4.077: 93.9724% ( 128) 00:15:42.839 4.077 - 4.101: 94.7683% ( 108) 00:15:42.839 4.101 - 4.124: 95.4167% ( 88) 00:15:42.839 4.124 - 4.148: 95.9915% ( 78) 00:15:42.839 4.148 - 4.172: 96.3157% ( 44) 00:15:42.839 4.172 - 4.196: 96.5073% ( 26) 00:15:42.839 4.196 - 4.219: 96.6473% ( 19) 00:15:42.839 4.219 - 4.243: 96.7209% ( 10) 00:15:42.839 4.243 - 4.267: 96.8610% ( 19) 00:15:42.839 4.267 - 4.290: 97.0304% ( 23) 00:15:42.839 4.290 - 4.314: 97.1631% ( 18) 00:15:42.839 4.314 - 4.338: 97.2589% ( 13) 00:15:42.839 4.338 - 4.361: 97.3178% ( 8) 00:15:42.839 4.361 - 4.385: 97.3989% ( 11) 00:15:42.839 4.385 - 4.409: 97.4578% ( 8) 00:15:42.839 4.409 - 4.433: 97.4947% ( 5) 00:15:42.839 4.433 - 4.456: 97.5094% ( 2) 00:15:42.839 4.456 - 4.480: 97.5168% ( 1) 00:15:42.839 4.480 - 4.504: 97.5315% ( 2) 00:15:42.839 4.504 - 4.527: 97.5536% ( 3) 00:15:42.839 4.527 - 4.551: 97.5610% ( 1) 00:15:42.839 4.551 - 4.575: 97.5978% ( 5) 00:15:42.839 4.575 - 4.599: 97.6420% ( 6) 00:15:42.839 4.599 - 4.622: 97.6715% ( 4) 00:15:42.839 4.622 - 4.646: 97.7010% ( 4) 00:15:42.839 4.646 - 4.670: 97.7378% ( 5) 00:15:42.839 4.670 - 4.693: 97.7820% ( 6) 00:15:42.839 4.693 - 4.717: 97.8115% ( 4) 00:15:42.839 4.717 - 4.741: 97.8557% ( 6) 00:15:42.839 4.741 - 4.764: 97.8926% ( 5) 00:15:42.839 4.764 - 4.788: 97.9147% ( 3) 00:15:42.839 4.788 - 4.812: 97.9589% ( 6) 00:15:42.839 4.812 - 4.836: 97.9810% ( 3) 00:15:42.839 4.836 - 4.859: 98.0105% ( 4) 00:15:42.839 4.859 - 4.883: 98.0694% ( 8) 00:15:42.839 4.883 - 4.907: 98.1063% ( 5) 00:15:42.839 4.907 - 4.930: 98.1357% ( 4) 00:15:42.839 4.954 - 4.978: 98.1431% ( 1) 00:15:42.839 4.978 - 5.001: 98.1799% ( 5) 00:15:42.839 5.001 - 5.025: 98.2094% ( 4) 00:15:42.839 5.025 - 5.049: 98.2389% ( 4) 00:15:42.839 5.049 - 5.073: 98.2463% ( 1) 00:15:42.839 5.073 - 5.096: 98.2831% ( 5) 00:15:42.839 5.096 - 5.120: 98.3126% ( 4) 00:15:42.839 5.120 - 5.144: 98.3199% ( 1) 00:15:42.839 5.144 - 5.167: 98.3273% ( 1) 00:15:42.839 5.167 - 5.191: 98.3347% ( 1) 00:15:42.839 5.191 - 5.215: 98.3642% ( 4) 00:15:42.839 5.239 - 5.262: 98.3789% ( 2) 00:15:42.839 5.333 - 5.357: 98.3863% ( 1) 00:15:42.839 5.404 - 5.428: 98.3936% ( 1) 00:15:42.839 5.452 - 5.476: 98.4084% ( 2) 00:15:42.839 5.570 - 5.594: 98.4231% ( 2) 00:15:42.839 5.689 - 5.713: 98.4305% ( 1) 
00:15:42.839 5.807 - 5.831: 98.4378% ( 1) 00:15:42.839 5.855 - 5.879: 98.4526% ( 2) 00:15:42.839 5.902 - 5.926: 98.4600% ( 1) 00:15:42.839 5.926 - 5.950: 98.4673% ( 1) 00:15:42.839 5.950 - 5.973: 98.4747% ( 1) 00:15:42.839 6.021 - 6.044: 98.4821% ( 1) 00:15:42.839 6.068 - 6.116: 98.4894% ( 1) 00:15:42.839 6.353 - 6.400: 98.4968% ( 1) 00:15:42.839 6.400 - 6.447: 98.5042% ( 1) 00:15:42.839 6.447 - 6.495: 98.5115% ( 1) 00:15:42.839 6.590 - 6.637: 98.5189% ( 1) 00:15:42.839 6.637 - 6.684: 98.5263% ( 1) 00:15:42.839 6.684 - 6.732: 98.5484% ( 3) 00:15:42.839 6.921 - 6.969: 98.5557% ( 1) 00:15:42.839 6.969 - 7.016: 98.5631% ( 1) 00:15:42.839 7.064 - 7.111: 98.5705% ( 1) 00:15:42.839 7.111 - 7.159: 98.5778% ( 1) 00:15:42.839 7.206 - 7.253: 98.5926% ( 2) 00:15:42.839 7.253 - 7.301: 98.6000% ( 1) 00:15:42.839 7.301 - 7.348: 98.6073% ( 1) 00:15:42.839 7.396 - 7.443: 98.6147% ( 1) 00:15:42.839 7.490 - 7.538: 98.6368% ( 3) 00:15:42.839 7.538 - 7.585: 98.6442% ( 1) 00:15:42.839 7.680 - 7.727: 98.6515% ( 1) 00:15:42.839 7.775 - 7.822: 98.6663% ( 2) 00:15:42.839 7.822 - 7.870: 98.6810% ( 2) 00:15:42.839 7.870 - 7.917: 98.6957% ( 2) 00:15:42.839 7.917 - 7.964: 98.7252% ( 4) 00:15:42.839 8.059 - 8.107: 98.7694% ( 6) 00:15:42.839 8.107 - 8.154: 98.7915% ( 3) 00:15:42.839 8.154 - 8.201: 98.7989% ( 1) 00:15:42.839 8.249 - 8.296: 98.8063% ( 1) 00:15:42.839 8.296 - 8.344: 98.8136% ( 1) 00:15:42.839 8.344 - 8.391: 98.8210% ( 1) 00:15:42.839 8.439 - 8.486: 98.8284% ( 1) 00:15:42.839 8.486 - 8.533: 98.8358% ( 1) 00:15:42.839 8.818 - 8.865: 98.8431% ( 1) 00:15:42.839 9.244 - 9.292: 98.8505% ( 1) 00:15:42.839 9.434 - 9.481: 98.8579% ( 1) 00:15:42.839 9.624 - 9.671: 98.8726% ( 2) 00:15:42.839 9.719 - 9.766: 98.8873% ( 2) 00:15:42.839 9.766 - 9.813: 98.8947% ( 1) 00:15:42.839 9.908 - 9.956: 98.9094% ( 2) 00:15:42.839 10.240 - 10.287: 98.9168% ( 1) 00:15:42.839 10.430 - 10.477: 98.9242% ( 1) 00:15:42.839 10.524 - 10.572: 98.9315% ( 1) 00:15:42.839 10.572 - 10.619: 98.9389% ( 1) 00:15:42.839 10.667 - 10.714: 98.9537% ( 2) 00:15:42.839 10.856 - 10.904: 98.9610% ( 1) 00:15:42.839 11.046 - 11.093: 98.9684% ( 1) 00:15:42.839 11.378 - 11.425: 98.9758% ( 1) 00:15:42.839 11.473 - 11.520: 98.9831% ( 1) 00:15:42.839 11.662 - 11.710: 98.9905% ( 1) 00:15:42.839 11.899 - 11.947: 98.9979% ( 1) 00:15:42.839 12.326 - 12.421: 99.0052% ( 1) 00:15:42.839 12.421 - 12.516: 99.0126% ( 1) 00:15:42.839 12.516 - 12.610: 99.0273% ( 2) 00:15:42.839 12.800 - 12.895: 99.0347% ( 1) 00:15:42.839 12.895 - 12.990: 99.0421% ( 1) 00:15:42.839 12.990 - 13.084: 99.0494% ( 1) 00:15:42.839 13.084 - 13.179: 99.0568% ( 1) 00:15:42.839 13.179 - 13.274: 99.0715% ( 2) 00:15:42.839 13.369 - 13.464: 99.0789% ( 1) 00:15:42.839 13.559 - 13.653: 99.0863% ( 1) 00:15:42.840 14.127 - 14.222: 99.0937% ( 1) 00:15:42.840 14.317 - 14.412: 99.1084% ( 2) 00:15:42.840 14.412 - 14.507: 99.1231% ( 2) 00:15:42.840 16.972 - 17.067: 99.1305% ( 1) 00:15:42.840 17.067 - 17.161: 99.1379% ( 1) 00:15:42.840 17.256 - 17.351: 99.1452% ( 1) 00:15:42.840 17.351 - 17.446: 99.1600% ( 2) 00:15:42.840 17.446 - 17.541: 99.1894% ( 4) 00:15:42.840 17.541 - 17.636: 99.2189% ( 4) 00:15:42.840 17.636 - 17.730: 99.2631% ( 6) 00:15:42.840 17.730 - 17.825: 99.3000% ( 5) 00:15:42.840 17.825 - 17.920: 99.3442% ( 6) 00:15:42.840 17.920 - 18.015: 99.4105% ( 9) 00:15:42.840 18.015 - 18.110: 99.4400% ( 4) 00:15:42.840 18.110 - 18.204: 99.4842% ( 6) 00:15:42.840 18.204 - 18.299: 99.5284% ( 6) 00:15:42.840 18.299 - 18.394: 99.5947% ( 9) 00:15:42.840 18.394 - 18.489: 99.6463% ( 7) 00:15:42.840 18.489 - 18.584: 
99.6684% ( 3) 00:15:42.840 18.584 - 18.679: 99.6831% ( 2) 00:15:42.840 18.679 - 18.773: 99.7126% ( 4) 00:15:42.840 18.773 - 18.868: 99.7421% ( 4) 00:15:42.840 18.868 - 18.963: 99.7716% ( 4) 00:15:42.840 18.963 - 19.058: 99.7937% ( 3) 00:15:42.840 19.058 - 19.153: 99.8305% ( 5) 00:15:42.840 19.153 - 19.247: 99.8379% ( 1) 00:15:42.840 19.437 - 19.532: 99.8453% ( 1) 00:15:42.840 19.532 - 19.627: 99.8526% ( 1) 00:15:42.840 19.627 - 19.721: 99.8600% ( 1) 00:15:42.840 19.816 - 19.911: 99.8674% ( 1) 00:15:42.840 20.196 - 20.290: 99.8747% ( 1) 00:15:42.840 22.092 - 22.187: 99.8821% ( 1) 00:15:42.840 23.799 - 23.893: 99.8895% ( 1) 00:15:42.840 29.013 - 29.203: 99.8968% ( 1) 00:15:42.840 3980.705 - 4004.978: 99.9853% ( 12) 00:15:42.840 4004.978 - 4029.250: 100.0000% ( 2) 00:15:42.840 00:15:42.840 Complete histogram 00:15:42.840 ================== 00:15:42.840 Range in us Cumulative Count 00:15:42.840 2.062 - 2.074: 0.3169% ( 43) 00:15:42.840 2.074 - 2.086: 25.5766% ( 3428) 00:15:42.840 2.086 - 2.098: 44.3593% ( 2549) 00:15:42.840 2.098 - 2.110: 47.1741% ( 382) 00:15:42.840 2.110 - 2.121: 59.0155% ( 1607) 00:15:42.840 2.121 - 2.133: 63.0609% ( 549) 00:15:42.840 2.133 - 2.145: 66.1410% ( 418) 00:15:42.840 2.145 - 2.157: 78.0930% ( 1622) 00:15:42.840 2.157 - 2.169: 82.1973% ( 557) 00:15:42.840 2.169 - 2.181: 84.1721% ( 268) 00:15:42.840 2.181 - 2.193: 88.5270% ( 591) 00:15:42.840 2.193 - 2.204: 89.7723% ( 169) 00:15:42.840 2.204 - 2.216: 90.4723% ( 95) 00:15:42.840 2.216 - 2.228: 91.4597% ( 134) 00:15:42.840 2.228 - 2.240: 92.2408% ( 106) 00:15:42.840 2.240 - 2.252: 94.1051% ( 253) 00:15:42.840 2.252 - 2.264: 94.8714% ( 104) 00:15:42.840 2.264 - 2.276: 95.1146% ( 33) 00:15:42.840 2.276 - 2.287: 95.1956% ( 11) 00:15:42.840 2.287 - 2.299: 95.2693% ( 10) 00:15:42.840 2.299 - 2.311: 95.4167% ( 20) 00:15:42.840 2.311 - 2.323: 95.7557% ( 46) 00:15:42.840 2.323 - 2.335: 95.9546% ( 27) 00:15:42.840 2.335 - 2.347: 96.0209% ( 9) 00:15:42.840 2.347 - 2.359: 96.1536% ( 18) 00:15:42.840 2.359 - 2.370: 96.4409% ( 39) 00:15:42.840 2.370 - 2.382: 96.6767% ( 32) 00:15:42.840 2.382 - 2.394: 97.0746% ( 54) 00:15:42.840 2.394 - 2.406: 97.3989% ( 44) 00:15:42.840 2.406 - 2.418: 97.6126% ( 29) 00:15:42.840 2.418 - 2.430: 97.8189% ( 28) 00:15:42.840 2.430 - 2.441: 97.9736% ( 21) 00:15:42.840 2.441 - 2.453: 98.0989% ( 17) 00:15:42.840 2.453 - 2.465: 98.2020% ( 14) 00:15:42.840 2.465 - 2.477: 98.2684% ( 9) 00:15:42.840 2.477 - 2.489: 98.3421% ( 10) 00:15:42.840 2.489 - 2.501: 98.3642% ( 3) 00:15:42.840 2.501 - 2.513: 98.3863% ( 3) 00:15:42.840 2.513 - 2.524: 98.4010% ( 2) 00:15:42.840 2.524 - 2.536: 98.4452% ( 6) 00:15:42.840 2.536 - 2.548: 98.4600% ( 2) 00:15:42.840 2.548 - 2.560: 98.4747% ( 2) 00:15:42.840 2.560 - 2.572: 98.4894% ( 2) 00:15:42.840 2.572 - 2.584: 98.5042% ( 2) 00:15:42.840 2.596 - 2.607: 98.5115% ( 1) 00:15:42.840 2.607 - 2.619: 98.5189% ( 1) 00:15:42.840 2.619 - 2.631: 98.5263% ( 1) 00:15:42.840 2.631 - 2.643: 98.5410% ( 2) 00:15:42.840 2.667 - 2.679: 98.5484% ( 1) 00:15:42.840 2.679 - 2.690: 98.5557% ( 1) 00:15:42.840 2.690 - 2.702: 98.5778% ( 3) 00:15:42.840 2.702 - 2.714: 98.5852% ( 1) 00:15:42.840 2.714 - 2.726: 98.5926% ( 1) 00:15:42.840 2.738 - 2.750: 98.6073% ( 2) 00:15:42.840 2.761 - 2.773: 98.6221% ( 2) 00:15:42.840 2.785 - 2.797: 98.6294% ( 1) 00:15:42.840 2.797 - 2.809: 98.6442% ( 2) 00:15:42.840 2.844 - 2.856: 98.6515% ( 1) 00:15:42.840 2.963 - 2.975: 98.6589% ( 1) 00:15:42.840 2.999 - 3.010: 98.6663% ( 1) 00:15:42.840 3.247 - 3.271: 98.6736% ( 1) 00:15:42.840 3.295 - 3.319: 98.6810% ( 1) 
00:15:42.840 3.319 - 3.342: 98.6884% ( 1) 00:15:42.840 3.342 - 3.366: 98.7105% ( 3) 00:15:42.840 3.366 - 3.390: 98.7252% ( 2) 00:15:42.840 3.390 - 3.413: 98.7400% ( 2) 00:15:42.840 3.437 - 3.461: 98.7621% ( 3) 00:15:42.840 3.461 - 3.484: 98.7694% ( 1) 00:15:42.840 3.484 - 3.508: 98.7768% ( 1) 00:15:42.840 3.532 - 3.556: 98.7842% ( 1) 00:15:42.840 3.627 - 3.650: 98.7915% ( 1) 00:15:42.840 3.650 - 3.674: 98.7989% ( 1) 00:15:42.840 3.698 - 3.721: 98.8063% ( 1) 00:15:42.840 3.745 - 3.769: 98.8136% ( 1) 00:15:42.840 3.769 - 3.793: 98.8210% ( 1) 00:15:42.840 [2024-07-11 11:01:56.881715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.840 3.982 - 4.006: 98.8284% ( 1) 00:15:42.840 4.006 - 4.030: 98.8358% ( 1) 00:15:42.840 4.148 - 4.172: 98.8431% ( 1) 00:15:42.840 5.547 - 5.570: 98.8505% ( 1) 00:15:42.840 5.641 - 5.665: 98.8652% ( 2) 00:15:42.840 5.736 - 5.760: 98.8726% ( 1) 00:15:42.840 5.784 - 5.807: 98.8800% ( 1) 00:15:42.840 5.855 - 5.879: 98.8873% ( 1) 00:15:42.840 5.879 - 5.902: 98.8947% ( 1) 00:15:42.840 6.021 - 6.044: 98.9021% ( 1) 00:15:42.840 6.116 - 6.163: 98.9094% ( 1) 00:15:42.840 6.210 - 6.258: 98.9242% ( 2) 00:15:42.840 6.258 - 6.305: 98.9315% ( 1) 00:15:42.840 6.400 - 6.447: 98.9389% ( 1) 00:15:42.840 6.827 - 6.874: 98.9463% ( 1) 00:15:42.840 7.016 - 7.064: 98.9537% ( 1) 00:15:42.840 7.111 - 7.159: 98.9610% ( 1) 00:15:42.840 7.822 - 7.870: 98.9684% ( 1) 00:15:42.840 7.917 - 7.964: 98.9758% ( 1) 00:15:42.840 15.455 - 15.550: 98.9831% ( 1) 00:15:42.840 15.550 - 15.644: 98.9979% ( 2) 00:15:42.840 15.739 - 15.834: 99.0347% ( 5) 00:15:42.840 15.834 - 15.929: 99.0494% ( 2) 00:15:42.840 15.929 - 16.024: 99.0715% ( 3) 00:15:42.840 16.119 - 16.213: 99.0863% ( 2) 00:15:42.840 16.213 - 16.308: 99.1452% ( 8) 00:15:42.840 16.308 - 16.403: 99.1821% ( 5) 00:15:42.840 16.403 - 16.498: 99.1968% ( 2) 00:15:42.840 16.498 - 16.593: 99.2042% ( 1) 00:15:42.840 16.593 - 16.687: 99.2263% ( 3) 00:15:42.840 16.687 - 16.782: 99.2631% ( 5) 00:15:42.840 16.782 - 16.877: 99.2926% ( 4) 00:15:42.840 16.877 - 16.972: 99.3000% ( 1) 00:15:42.840 16.972 - 17.067: 99.3295% ( 4) 00:15:42.840 17.067 - 17.161: 99.3442% ( 2) 00:15:42.840 17.161 - 17.256: 99.3589% ( 2) 00:15:42.840 17.256 - 17.351: 99.3663% ( 1) 00:15:42.840 17.446 - 17.541: 99.3737% ( 1) 00:15:42.840 17.541 - 17.636: 99.3810% ( 1) 00:15:42.840 17.730 - 17.825: 99.3884% ( 1) 00:15:42.840 18.015 - 18.110: 99.4031% ( 2) 00:15:42.840 18.394 - 18.489: 99.4179% ( 2) 00:15:42.840 3592.344 - 3616.616: 99.4252% ( 1) 00:15:42.840 3980.705 - 4004.978: 99.8674% ( 60) 00:15:42.840 4004.978 - 4029.250: 100.0000% ( 18) 00:15:42.840 00:15:42.840 11:01:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:42.840 11:01:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.840 11:01:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.840 11:01:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:42.840 11:01:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.840 [ 00:15:42.840 { 00:15:42.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.840 "subtype": "Discovery", 00:15:42.840 "listen_addresses": [], 00:15:42.840 "allow_any_host":
true, 00:15:42.840 "hosts": [] 00:15:42.840 }, 00:15:42.840 { 00:15:42.840 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.840 "subtype": "NVMe", 00:15:42.841 "listen_addresses": [ 00:15:42.841 { 00:15:42.841 "trtype": "VFIOUSER", 00:15:42.841 "adrfam": "IPv4", 00:15:42.841 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.841 "trsvcid": "0" 00:15:42.841 } 00:15:42.841 ], 00:15:42.841 "allow_any_host": true, 00:15:42.841 "hosts": [], 00:15:42.841 "serial_number": "SPDK1", 00:15:42.841 "model_number": "SPDK bdev Controller", 00:15:42.841 "max_namespaces": 32, 00:15:42.841 "min_cntlid": 1, 00:15:42.841 "max_cntlid": 65519, 00:15:42.841 "namespaces": [ 00:15:42.841 { 00:15:42.841 "nsid": 1, 00:15:42.841 "bdev_name": "Malloc1", 00:15:42.841 "name": "Malloc1", 00:15:42.841 "nguid": "61AFA3CEDDE247BF96A8098800089327", 00:15:42.841 "uuid": "61afa3ce-dde2-47bf-96a8-098800089327" 00:15:42.841 } 00:15:42.841 ] 00:15:42.841 }, 00:15:42.841 { 00:15:42.841 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.841 "subtype": "NVMe", 00:15:42.841 "listen_addresses": [ 00:15:42.841 { 00:15:42.841 "trtype": "VFIOUSER", 00:15:42.841 "adrfam": "IPv4", 00:15:42.841 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.841 "trsvcid": "0" 00:15:42.841 } 00:15:42.841 ], 00:15:42.841 "allow_any_host": true, 00:15:42.841 "hosts": [], 00:15:42.841 "serial_number": "SPDK2", 00:15:42.841 "model_number": "SPDK bdev Controller", 00:15:42.841 "max_namespaces": 32, 00:15:42.841 "min_cntlid": 1, 00:15:42.841 "max_cntlid": 65519, 00:15:42.841 "namespaces": [ 00:15:42.841 { 00:15:42.841 "nsid": 1, 00:15:42.841 "bdev_name": "Malloc2", 00:15:42.841 "name": "Malloc2", 00:15:42.841 "nguid": "4B57C0983FD541009D0F447D8B92E1BB", 00:15:42.841 "uuid": "4b57c098-3fd5-4100-9d0f-447d8b92e1bb" 00:15:42.841 } 00:15:42.841 ] 00:15:42.841 } 00:15:42.841 ] 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=212996 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:15:42.841 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:42.841 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:15:43.100 [2024-07-11 11:01:57.324217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.100 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:43.360 Malloc3 00:15:43.360 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:43.619 [2024-07-11 11:01:57.895257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.619 11:01:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.619 Asynchronous Event Request test 00:15:43.619 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.619 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.619 Registering asynchronous event callbacks... 00:15:43.619 Starting namespace attribute notice tests for all controllers... 00:15:43.619 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.619 aer_cb - Changed Namespace 00:15:43.619 Cleaning up... 
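The AER exercise above reduces to four steps: start the aer listener with a touch file, poll for the file, add a namespace over RPC, then wait for the listener to observe the namespace-attribute notice. A minimal standalone sketch of that flow, using only the commands and the waitforfile() polling loop visible in this log; $rootdir is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and it assumes the target already serves nqn.2019-07.io.spdk:cnode1 over vfio-user:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  traddr=/var/run/vfio-user/domain/vfio-user1/1
  # Start the AER listener; it creates the touch file once its callbacks are registered.
  "$rootdir/test/nvme/aer/aer" -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode1" -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  # Same loop as waitforfile(): poll up to 200 times at 0.1 s intervals (a 20 s budget).
  i=0
  while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do
      i=$((i + 1))
      sleep 0.1
  done
  rm -f /tmp/aer_touch_file
  # Adding a second namespace is what fires the attribute-change AEN the listener asserts on.
  "$rootdir/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc3
  "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait "$aerpid"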
00:15:43.881 [ 00:15:43.881 { 00:15:43.881 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.881 "subtype": "Discovery", 00:15:43.881 "listen_addresses": [], 00:15:43.881 "allow_any_host": true, 00:15:43.881 "hosts": [] 00:15:43.881 }, 00:15:43.881 { 00:15:43.881 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.881 "subtype": "NVMe", 00:15:43.881 "listen_addresses": [ 00:15:43.881 { 00:15:43.881 "trtype": "VFIOUSER", 00:15:43.881 "adrfam": "IPv4", 00:15:43.881 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.881 "trsvcid": "0" 00:15:43.881 } 00:15:43.881 ], 00:15:43.881 "allow_any_host": true, 00:15:43.881 "hosts": [], 00:15:43.881 "serial_number": "SPDK1", 00:15:43.881 "model_number": "SPDK bdev Controller", 00:15:43.881 "max_namespaces": 32, 00:15:43.881 "min_cntlid": 1, 00:15:43.881 "max_cntlid": 65519, 00:15:43.881 "namespaces": [ 00:15:43.881 { 00:15:43.881 "nsid": 1, 00:15:43.881 "bdev_name": "Malloc1", 00:15:43.881 "name": "Malloc1", 00:15:43.881 "nguid": "61AFA3CEDDE247BF96A8098800089327", 00:15:43.881 "uuid": "61afa3ce-dde2-47bf-96a8-098800089327" 00:15:43.881 }, 00:15:43.881 { 00:15:43.881 "nsid": 2, 00:15:43.881 "bdev_name": "Malloc3", 00:15:43.881 "name": "Malloc3", 00:15:43.881 "nguid": "75A65988C09940EDB8F29DEB46581380", 00:15:43.881 "uuid": "75a65988-c099-40ed-b8f2-9deb46581380" 00:15:43.881 } 00:15:43.881 ] 00:15:43.881 }, 00:15:43.881 { 00:15:43.881 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.881 "subtype": "NVMe", 00:15:43.881 "listen_addresses": [ 00:15:43.881 { 00:15:43.881 "trtype": "VFIOUSER", 00:15:43.881 "adrfam": "IPv4", 00:15:43.881 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.881 "trsvcid": "0" 00:15:43.881 } 00:15:43.881 ], 00:15:43.881 "allow_any_host": true, 00:15:43.881 "hosts": [], 00:15:43.881 "serial_number": "SPDK2", 00:15:43.881 "model_number": "SPDK bdev Controller", 00:15:43.881 "max_namespaces": 32, 00:15:43.881 "min_cntlid": 1, 00:15:43.881 "max_cntlid": 65519, 00:15:43.881 "namespaces": [ 00:15:43.881 { 00:15:43.881 "nsid": 1, 00:15:43.881 "bdev_name": "Malloc2", 00:15:43.881 "name": "Malloc2", 00:15:43.881 "nguid": "4B57C0983FD541009D0F447D8B92E1BB", 00:15:43.881 "uuid": "4b57c098-3fd5-4100-9d0f-447d8b92e1bb" 00:15:43.881 } 00:15:43.881 ] 00:15:43.881 } 00:15:43.881 ] 00:15:43.881 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 212996 00:15:43.881 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.881 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.881 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.881 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:43.881 [2024-07-11 11:01:58.186388] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:15:43.881 [2024-07-11 11:01:58.186432] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213133 ] 00:15:43.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.881 [2024-07-11 11:01:58.219847] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:43.881 [2024-07-11 11:01:58.230932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.881 [2024-07-11 11:01:58.230962] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f46345c8000 00:15:43.881 [2024-07-11 11:01:58.231933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.232940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.233952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.234960] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.235963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.236967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.237974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.238973] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.881 [2024-07-11 11:01:58.239987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.881 [2024-07-11 11:01:58.240009] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f463337c000 00:15:43.881 [2024-07-11 11:01:58.241141] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.881 [2024-07-11 11:01:58.257856] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:43.881 [2024-07-11 11:01:58.257892] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:43.881 [2024-07-11 11:01:58.259969] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.881 [2024-07-11 11:01:58.260025] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.881 [2024-07-11 11:01:58.260126] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:43.881 [2024-07-11 11:01:58.260151] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:43.882 [2024-07-11 11:01:58.260161] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:43.882 [2024-07-11 11:01:58.260977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:43.882 [2024-07-11 11:01:58.260998] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:43.882 [2024-07-11 11:01:58.261011] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:43.882 [2024-07-11 11:01:58.261985] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.882 [2024-07-11 11:01:58.262006] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:43.882 [2024-07-11 11:01:58.262019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.262990] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:43.882 [2024-07-11 11:01:58.263012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.263997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:43.882 [2024-07-11 11:01:58.264033] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:43.882 [2024-07-11 11:01:58.264042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.264053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.264182] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:43.882 [2024-07-11 11:01:58.264191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.264200] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:43.882 [2024-07-11 11:01:58.265006] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:43.882 [2024-07-11 11:01:58.266015] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:43.882 [2024-07-11 11:01:58.267021] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.882 [2024-07-11 11:01:58.268020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.882 [2024-07-11 11:01:58.268102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.882 [2024-07-11 11:01:58.269034] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:43.882 [2024-07-11 11:01:58.269068] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.882 [2024-07-11 11:01:58.269077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.269100] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:43.882 [2024-07-11 11:01:58.269113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.269133] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.882 [2024-07-11 11:01:58.269143] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.882 [2024-07-11 11:01:58.269162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.882 [2024-07-11 11:01:58.275765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:43.882 [2024-07-11 11:01:58.275789] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:43.882 [2024-07-11 11:01:58.275802] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:43.882 [2024-07-11 11:01:58.275811] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:43.882 [2024-07-11 11:01:58.275819] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.882 [2024-07-11 11:01:58.275827] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:43.882 [2024-07-11 11:01:58.275835] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:43.882 [2024-07-11 11:01:58.275843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.275857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.275873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:15:43.882 [2024-07-11 11:01:58.283763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.882 [2024-07-11 11:01:58.283792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.882 [2024-07-11 11:01:58.283807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.882 [2024-07-11 11:01:58.283819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.882 [2024-07-11 11:01:58.283831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.882 [2024-07-11 11:01:58.283840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.283855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.283885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.882 [2024-07-11 11:01:58.291763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.882 [2024-07-11 11:01:58.291782] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:43.882 [2024-07-11 11:01:58.291792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.291804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.291815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.291829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.882 [2024-07-11 11:01:58.299769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.882 [2024-07-11 11:01:58.299840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.299855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.882 [2024-07-11 11:01:58.299868] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.882 [2024-07-11 11:01:58.299877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.882 [2024-07-11 11:01:58.299887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.306766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.306792] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:44.143 [2024-07-11 11:01:58.306828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.306845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.306861] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.143 [2024-07-11 11:01:58.306870] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.143 [2024-07-11 11:01:58.306881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.315763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.315794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.315811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.315825] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.143 [2024-07-11 11:01:58.315833] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.143 [2024-07-11 11:01:58.315843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.323765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.323787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:44.143 
[2024-07-11 11:01:58.323854] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:44.143 [2024-07-11 11:01:58.323861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:44.143 [2024-07-11 11:01:58.323870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:44.143 [2024-07-11 11:01:58.323897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.331764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.331802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.339779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.339804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.347766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:44.143 [2024-07-11 11:01:58.347790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:44.143 [2024-07-11 11:01:58.355763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:44.144 [2024-07-11 11:01:58.355794] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:44.144 [2024-07-11 11:01:58.355805] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:44.144 [2024-07-11 11:01:58.355811] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:44.144 [2024-07-11 11:01:58.355817] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:44.144 [2024-07-11 11:01:58.355827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:44.144 [2024-07-11 11:01:58.355838] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:44.144 [2024-07-11 11:01:58.355847] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:44.144 [2024-07-11 11:01:58.355855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:44.144 [2024-07-11 11:01:58.355866] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:44.144 [2024-07-11 11:01:58.355874] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.144 [2024-07-11 11:01:58.355883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:15:44.144 [2024-07-11 11:01:58.355895] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:44.144 [2024-07-11 11:01:58.355903] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:44.144 [2024-07-11 11:01:58.355912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:44.144 [2024-07-11 11:01:58.363765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:44.144 [2024-07-11 11:01:58.363793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:44.144 [2024-07-11 11:01:58.363825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:44.144 [2024-07-11 11:01:58.363837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:44.144 ===================================================== 00:15:44.144 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.144 ===================================================== 00:15:44.144 Controller Capabilities/Features 00:15:44.144 ================================ 00:15:44.144 Vendor ID: 4e58 00:15:44.144 Subsystem Vendor ID: 4e58 00:15:44.144 Serial Number: SPDK2 00:15:44.144 Model Number: SPDK bdev Controller 00:15:44.144 Firmware Version: 24.09 00:15:44.144 Recommended Arb Burst: 6 00:15:44.144 IEEE OUI Identifier: 8d 6b 50 00:15:44.144 Multi-path I/O 00:15:44.144 May have multiple subsystem ports: Yes 00:15:44.144 May have multiple controllers: Yes 00:15:44.144 Associated with SR-IOV VF: No 00:15:44.144 Max Data Transfer Size: 131072 00:15:44.144 Max Number of Namespaces: 32 00:15:44.144 Max Number of I/O Queues: 127 00:15:44.144 NVMe Specification Version (VS): 1.3 00:15:44.144 NVMe Specification Version (Identify): 1.3 00:15:44.144 Maximum Queue Entries: 256 00:15:44.144 Contiguous Queues Required: Yes 00:15:44.144 Arbitration Mechanisms Supported 00:15:44.144 Weighted Round Robin: Not Supported 00:15:44.144 Vendor Specific: Not Supported 00:15:44.144 Reset Timeout: 15000 ms 00:15:44.144 Doorbell Stride: 4 bytes 00:15:44.144 NVM Subsystem Reset: Not Supported 00:15:44.144 Command Sets Supported 00:15:44.144 NVM Command Set: Supported 00:15:44.144 Boot Partition: Not Supported 00:15:44.144 Memory Page Size Minimum: 4096 bytes 00:15:44.144 Memory Page Size Maximum: 4096 bytes 00:15:44.144 Persistent Memory Region: Not Supported 00:15:44.144 Optional Asynchronous Events Supported 00:15:44.144 Namespace Attribute Notices: Supported 00:15:44.144 Firmware Activation Notices: Not Supported 00:15:44.144 ANA Change Notices: Not Supported 00:15:44.144 PLE Aggregate Log Change Notices: Not Supported 00:15:44.144 LBA Status Info Alert Notices: Not Supported 00:15:44.144 EGE Aggregate Log Change Notices: Not Supported 00:15:44.144 Normal NVM Subsystem Shutdown event: Not Supported 00:15:44.144 Zone Descriptor Change Notices: Not Supported 00:15:44.144 Discovery Log Change Notices: Not Supported 00:15:44.144 Controller Attributes 00:15:44.144 128-bit Host Identifier: Supported 00:15:44.144 Non-Operational Permissive Mode: Not Supported 00:15:44.144 NVM Sets: Not Supported 00:15:44.144 Read Recovery Levels: Not Supported 
00:15:44.144 Endurance Groups: Not Supported 00:15:44.144 Predictable Latency Mode: Not Supported 00:15:44.144 Traffic Based Keep ALive: Not Supported 00:15:44.144 Namespace Granularity: Not Supported 00:15:44.144 SQ Associations: Not Supported 00:15:44.144 UUID List: Not Supported 00:15:44.144 Multi-Domain Subsystem: Not Supported 00:15:44.144 Fixed Capacity Management: Not Supported 00:15:44.144 Variable Capacity Management: Not Supported 00:15:44.144 Delete Endurance Group: Not Supported 00:15:44.144 Delete NVM Set: Not Supported 00:15:44.144 Extended LBA Formats Supported: Not Supported 00:15:44.144 Flexible Data Placement Supported: Not Supported 00:15:44.144 00:15:44.144 Controller Memory Buffer Support 00:15:44.144 ================================ 00:15:44.144 Supported: No 00:15:44.144 00:15:44.144 Persistent Memory Region Support 00:15:44.144 ================================ 00:15:44.144 Supported: No 00:15:44.144 00:15:44.144 Admin Command Set Attributes 00:15:44.144 ============================ 00:15:44.144 Security Send/Receive: Not Supported 00:15:44.144 Format NVM: Not Supported 00:15:44.144 Firmware Activate/Download: Not Supported 00:15:44.144 Namespace Management: Not Supported 00:15:44.144 Device Self-Test: Not Supported 00:15:44.144 Directives: Not Supported 00:15:44.144 NVMe-MI: Not Supported 00:15:44.144 Virtualization Management: Not Supported 00:15:44.144 Doorbell Buffer Config: Not Supported 00:15:44.144 Get LBA Status Capability: Not Supported 00:15:44.144 Command & Feature Lockdown Capability: Not Supported 00:15:44.144 Abort Command Limit: 4 00:15:44.144 Async Event Request Limit: 4 00:15:44.144 Number of Firmware Slots: N/A 00:15:44.144 Firmware Slot 1 Read-Only: N/A 00:15:44.144 Firmware Activation Without Reset: N/A 00:15:44.144 Multiple Update Detection Support: N/A 00:15:44.144 Firmware Update Granularity: No Information Provided 00:15:44.144 Per-Namespace SMART Log: No 00:15:44.144 Asymmetric Namespace Access Log Page: Not Supported 00:15:44.144 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:44.144 Command Effects Log Page: Supported 00:15:44.144 Get Log Page Extended Data: Supported 00:15:44.144 Telemetry Log Pages: Not Supported 00:15:44.144 Persistent Event Log Pages: Not Supported 00:15:44.144 Supported Log Pages Log Page: May Support 00:15:44.144 Commands Supported & Effects Log Page: Not Supported 00:15:44.144 Feature Identifiers & Effects Log Page:May Support 00:15:44.144 NVMe-MI Commands & Effects Log Page: May Support 00:15:44.144 Data Area 4 for Telemetry Log: Not Supported 00:15:44.144 Error Log Page Entries Supported: 128 00:15:44.144 Keep Alive: Supported 00:15:44.144 Keep Alive Granularity: 10000 ms 00:15:44.144 00:15:44.144 NVM Command Set Attributes 00:15:44.144 ========================== 00:15:44.144 Submission Queue Entry Size 00:15:44.144 Max: 64 00:15:44.145 Min: 64 00:15:44.145 Completion Queue Entry Size 00:15:44.145 Max: 16 00:15:44.145 Min: 16 00:15:44.145 Number of Namespaces: 32 00:15:44.145 Compare Command: Supported 00:15:44.145 Write Uncorrectable Command: Not Supported 00:15:44.145 Dataset Management Command: Supported 00:15:44.145 Write Zeroes Command: Supported 00:15:44.145 Set Features Save Field: Not Supported 00:15:44.145 Reservations: Not Supported 00:15:44.145 Timestamp: Not Supported 00:15:44.145 Copy: Supported 00:15:44.145 Volatile Write Cache: Present 00:15:44.145 Atomic Write Unit (Normal): 1 00:15:44.145 Atomic Write Unit (PFail): 1 00:15:44.145 Atomic Compare & Write Unit: 1 00:15:44.145 Fused Compare & Write: 
Supported 00:15:44.145 Scatter-Gather List 00:15:44.145 SGL Command Set: Supported (Dword aligned) 00:15:44.145 SGL Keyed: Not Supported 00:15:44.145 SGL Bit Bucket Descriptor: Not Supported 00:15:44.145 SGL Metadata Pointer: Not Supported 00:15:44.145 Oversized SGL: Not Supported 00:15:44.145 SGL Metadata Address: Not Supported 00:15:44.145 SGL Offset: Not Supported 00:15:44.145 Transport SGL Data Block: Not Supported 00:15:44.145 Replay Protected Memory Block: Not Supported 00:15:44.145 00:15:44.145 Firmware Slot Information 00:15:44.145 ========================= 00:15:44.145 Active slot: 1 00:15:44.145 Slot 1 Firmware Revision: 24.09 00:15:44.145 00:15:44.145 00:15:44.145 Commands Supported and Effects 00:15:44.145 ============================== 00:15:44.145 Admin Commands 00:15:44.145 -------------- 00:15:44.145 Get Log Page (02h): Supported 00:15:44.145 Identify (06h): Supported 00:15:44.145 Abort (08h): Supported 00:15:44.145 Set Features (09h): Supported 00:15:44.145 Get Features (0Ah): Supported 00:15:44.145 Asynchronous Event Request (0Ch): Supported 00:15:44.145 Keep Alive (18h): Supported 00:15:44.145 I/O Commands 00:15:44.145 ------------ 00:15:44.145 Flush (00h): Supported LBA-Change 00:15:44.145 Write (01h): Supported LBA-Change 00:15:44.145 Read (02h): Supported 00:15:44.145 Compare (05h): Supported 00:15:44.145 Write Zeroes (08h): Supported LBA-Change 00:15:44.145 Dataset Management (09h): Supported LBA-Change 00:15:44.145 Copy (19h): Supported LBA-Change 00:15:44.145 00:15:44.145 Error Log 00:15:44.145 ========= 00:15:44.145 00:15:44.145 Arbitration 00:15:44.145 =========== 00:15:44.145 Arbitration Burst: 1 00:15:44.145 00:15:44.145 Power Management 00:15:44.145 ================ 00:15:44.145 Number of Power States: 1 00:15:44.145 Current Power State: Power State #0 00:15:44.145 Power State #0: 00:15:44.145 Max Power: 0.00 W 00:15:44.145 Non-Operational State: Operational 00:15:44.145 Entry Latency: Not Reported 00:15:44.145 Exit Latency: Not Reported 00:15:44.145 Relative Read Throughput: 0 00:15:44.145 Relative Read Latency: 0 00:15:44.145 Relative Write Throughput: 0 00:15:44.145 Relative Write Latency: 0 00:15:44.145 Idle Power: Not Reported 00:15:44.145 Active Power: Not Reported 00:15:44.145 Non-Operational Permissive Mode: Not Supported 00:15:44.145 00:15:44.145 Health Information 00:15:44.145 ================== 00:15:44.145 Critical Warnings: 00:15:44.145 Available Spare Space: OK 00:15:44.145 Temperature: OK 00:15:44.145 Device Reliability: OK 00:15:44.145 Read Only: No 00:15:44.145 Volatile Memory Backup: OK 00:15:44.145 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:44.145 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:44.145 Available Spare: 0% 00:15:44.145 Available Spare Threshold: 0% 00:15:44.145 [2024-07-11 11:01:58.363961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:44.145 [2024-07-11 11:01:58.371766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:44.145 [2024-07-11 11:01:58.371820] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:44.145 [2024-07-11 11:01:58.371837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.145 [2024-07-11 11:01:58.371848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.145 [2024-07-11 11:01:58.371858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.145 [2024-07-11 11:01:58.371868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.145 [2024-07-11 11:01:58.375778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:44.145 [2024-07-11 11:01:58.375805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:44.145 [2024-07-11 11:01:58.375968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.145 [2024-07-11 11:01:58.376053] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:44.145 [2024-07-11 11:01:58.376084] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:44.145 [2024-07-11 11:01:58.376979] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:44.145 [2024-07-11 11:01:58.377003] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:44.145 [2024-07-11 11:01:58.377067] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:44.145 [2024-07-11 11:01:58.378237] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:44.145 Life Percentage Used: 0% 00:15:44.145 Data Units Read: 0 00:15:44.145 Data Units Written: 0 00:15:44.145 Host Read Commands: 0 00:15:44.145 Host Write Commands: 0 00:15:44.145 Controller Busy Time: 0 minutes 00:15:44.145 Power Cycles: 0 00:15:44.145 Power On Hours: 0 hours 00:15:44.145 Unsafe Shutdowns: 0 00:15:44.145 Unrecoverable Media Errors: 0 00:15:44.145 Lifetime Error Log Entries: 0 00:15:44.145 Warning Temperature Time: 0 minutes 00:15:44.145 Critical Temperature Time: 0 minutes 00:15:44.145 00:15:44.145 Number of Queues 00:15:44.145 ================ 00:15:44.145 Number of I/O Submission Queues: 127 00:15:44.145 Number of I/O Completion Queues: 127 00:15:44.145 00:15:44.145 Active Namespaces 00:15:44.145 ================= 00:15:44.145 Namespace ID:1 00:15:44.145 Error Recovery Timeout: Unlimited 00:15:44.145 Command Set Identifier: NVM (00h) 00:15:44.145 Deallocate: Supported 00:15:44.145 Deallocated/Unwritten Error: Not Supported 00:15:44.145 Deallocated Read Value: Unknown 00:15:44.145 Deallocate in Write Zeroes: Not Supported 00:15:44.145 Deallocated Guard Field: 0xFFFF 00:15:44.145 Flush: Supported 00:15:44.145 Reservation: Supported 00:15:44.145 Namespace Sharing Capabilities: Multiple Controllers 00:15:44.145 Size (in LBAs): 131072 (0GiB) 00:15:44.145 Capacity (in LBAs): 131072 (0GiB) 00:15:44.145 Utilization (in LBAs): 131072 (0GiB) 00:15:44.145 NGUID: 4B57C0983FD541009D0F447D8B92E1BB 00:15:44.145 UUID: 4b57c098-3fd5-4100-9d0f-447d8b92e1bb 00:15:44.145 Thin Provisioning: Not Supported 00:15:44.145 Per-NS Atomic Units: Yes 00:15:44.145 Atomic Boundary Size (Normal): 0 00:15:44.145 Atomic Boundary Size
(PFail): 0 00:15:44.145 Atomic Boundary Offset: 0 00:15:44.145 Maximum Single Source Range Length: 65535 00:15:44.145 Maximum Copy Length: 65535 00:15:44.145 Maximum Source Range Count: 1 00:15:44.145 NGUID/EUI64 Never Reused: No 00:15:44.145 Namespace Write Protected: No 00:15:44.145 Number of LBA Formats: 1 00:15:44.145 Current LBA Format: LBA Format #00 00:15:44.145 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:44.145 00:15:44.145 11:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:44.145 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.405 [2024-07-11 11:01:58.604457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.679 Initializing NVMe Controllers 00:15:49.679 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.679 Initialization complete. Launching workers. 00:15:49.679 ======================================================== 00:15:49.679 Latency(us) 00:15:49.679 Device Information : IOPS MiB/s Average min max 00:15:49.679 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34546.53 134.95 3704.69 1186.29 7648.08 00:15:49.679 ======================================================== 00:15:49.679 Total : 34546.53 134.95 3704.69 1186.29 7648.08 00:15:49.679 00:15:49.679 [2024-07-11 11:02:03.709116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.679 11:02:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:49.679 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.679 [2024-07-11 11:02:03.950820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.952 Initializing NVMe Controllers 00:15:54.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:54.952 Initialization complete. Launching workers. 
00:15:54.952 ======================================================== 00:15:54.952 Latency(us) 00:15:54.952 Device Information : IOPS MiB/s Average min max 00:15:54.952 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32235.76 125.92 3969.71 1221.20 9708.89 00:15:54.952 ======================================================== 00:15:54.952 Total : 32235.76 125.92 3969.71 1221.20 9708.89 00:15:54.952 00:15:54.952 [2024-07-11 11:02:08.970763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.952 11:02:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.952 [2024-07-11 11:02:09.173478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.231 [2024-07-11 11:02:14.308914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.231 Initializing NVMe Controllers 00:16:00.231 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.231 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.231 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:00.231 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:00.231 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:00.231 Initialization complete. Launching workers. 00:16:00.231 Starting thread on core 2 00:16:00.231 Starting thread on core 3 00:16:00.231 Starting thread on core 1 00:16:00.231 11:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:00.231 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.231 [2024-07-11 11:02:14.610825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.525 [2024-07-11 11:02:17.662066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.525 Initializing NVMe Controllers 00:16:03.525 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.525 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:03.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:03.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:03.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:03.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:03.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:03.525 Initialization complete. Launching workers. 
00:16:03.525 Starting thread on core 1 with urgent priority queue 00:16:03.525 Starting thread on core 2 with urgent priority queue 00:16:03.525 Starting thread on core 3 with urgent priority queue 00:16:03.525 Starting thread on core 0 with urgent priority queue 00:16:03.525 SPDK bdev Controller (SPDK2 ) core 0: 1733.67 IO/s 57.68 secs/100000 ios 00:16:03.525 SPDK bdev Controller (SPDK2 ) core 1: 1992.67 IO/s 50.18 secs/100000 ios 00:16:03.525 SPDK bdev Controller (SPDK2 ) core 2: 1974.67 IO/s 50.64 secs/100000 ios 00:16:03.525 SPDK bdev Controller (SPDK2 ) core 3: 2054.00 IO/s 48.69 secs/100000 ios 00:16:03.525 ======================================================== 00:16:03.525 00:16:03.525 11:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:03.525 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.784 [2024-07-11 11:02:17.963269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.784 Initializing NVMe Controllers 00:16:03.784 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.784 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.784 Namespace ID: 1 size: 0GB 00:16:03.784 Initialization complete. 00:16:03.784 INFO: using host memory buffer for IO 00:16:03.784 Hello world! 00:16:03.784 [2024-07-11 11:02:17.975356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.785 11:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:03.785 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.043 [2024-07-11 11:02:18.264050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.983 Initializing NVMe Controllers 00:16:04.983 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.983 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.983 Initialization complete. Launching workers. 
00:16:04.983 submit (in ns) avg, min, max = 8807.3, 3507.8, 4022257.8 00:16:04.983 complete (in ns) avg, min, max = 24963.5, 2070.0, 4021732.2 00:16:04.983 00:16:04.983 Submit histogram 00:16:04.983 ================ 00:16:04.983 Range in us Cumulative Count 00:16:04.983 3.484 - 3.508: 0.0076% ( 1) 00:16:04.983 3.508 - 3.532: 0.2575% ( 33) 00:16:04.983 3.532 - 3.556: 0.9239% ( 88) 00:16:04.983 3.556 - 3.579: 2.7641% ( 243) 00:16:04.983 3.579 - 3.603: 6.7853% ( 531) 00:16:04.983 3.603 - 3.627: 13.2980% ( 860) 00:16:04.983 3.627 - 3.650: 22.2567% ( 1183) 00:16:04.983 3.650 - 3.674: 33.0935% ( 1431) 00:16:04.983 3.674 - 3.698: 41.7796% ( 1147) 00:16:04.983 3.698 - 3.721: 50.7081% ( 1179) 00:16:04.983 3.721 - 3.745: 56.2287% ( 729) 00:16:04.983 3.745 - 3.769: 60.7421% ( 596) 00:16:04.983 3.769 - 3.793: 64.7633% ( 531) 00:16:04.983 3.793 - 3.816: 68.7088% ( 521) 00:16:04.983 3.816 - 3.840: 72.0712% ( 444) 00:16:04.983 3.840 - 3.864: 75.0398% ( 392) 00:16:04.983 3.864 - 3.887: 78.3188% ( 433) 00:16:04.983 3.887 - 3.911: 81.7266% ( 450) 00:16:04.983 3.911 - 3.935: 84.8315% ( 410) 00:16:04.983 3.935 - 3.959: 86.8383% ( 265) 00:16:04.983 3.959 - 3.982: 88.7391% ( 251) 00:16:04.983 3.982 - 4.006: 90.5263% ( 236) 00:16:04.983 4.006 - 4.030: 91.9500% ( 188) 00:16:04.983 4.030 - 4.053: 93.0481% ( 145) 00:16:04.983 4.053 - 4.077: 93.9947% ( 125) 00:16:04.983 4.077 - 4.101: 94.8050% ( 107) 00:16:04.983 4.101 - 4.124: 95.4335% ( 83) 00:16:04.983 4.124 - 4.148: 95.7970% ( 48) 00:16:04.983 4.148 - 4.172: 96.0091% ( 28) 00:16:04.983 4.172 - 4.196: 96.1833% ( 23) 00:16:04.983 4.196 - 4.219: 96.3650% ( 24) 00:16:04.983 4.219 - 4.243: 96.5165% ( 20) 00:16:04.983 4.243 - 4.267: 96.6149% ( 13) 00:16:04.983 4.267 - 4.290: 96.7134% ( 13) 00:16:04.983 4.290 - 4.314: 96.7967% ( 11) 00:16:04.983 4.314 - 4.338: 96.8875% ( 12) 00:16:04.983 4.338 - 4.361: 96.9254% ( 5) 00:16:04.983 4.361 - 4.385: 97.0011% ( 10) 00:16:04.983 4.385 - 4.409: 97.0239% ( 3) 00:16:04.983 4.409 - 4.433: 97.0541% ( 4) 00:16:04.983 4.433 - 4.456: 97.0617% ( 1) 00:16:04.983 4.456 - 4.480: 97.0769% ( 2) 00:16:04.983 4.480 - 4.504: 97.0844% ( 1) 00:16:04.983 4.622 - 4.646: 97.0920% ( 1) 00:16:04.983 4.646 - 4.670: 97.1147% ( 3) 00:16:04.983 4.693 - 4.717: 97.1299% ( 2) 00:16:04.983 4.717 - 4.741: 97.1829% ( 7) 00:16:04.983 4.741 - 4.764: 97.2435% ( 8) 00:16:04.983 4.764 - 4.788: 97.2586% ( 2) 00:16:04.983 4.788 - 4.812: 97.2738% ( 2) 00:16:04.983 4.812 - 4.836: 97.3419% ( 9) 00:16:04.983 4.836 - 4.859: 97.3949% ( 7) 00:16:04.983 4.859 - 4.883: 97.4404% ( 6) 00:16:04.983 4.883 - 4.907: 97.4934% ( 7) 00:16:04.983 4.907 - 4.930: 97.5464% ( 7) 00:16:04.983 4.930 - 4.954: 97.5767% ( 4) 00:16:04.983 4.954 - 4.978: 97.6070% ( 4) 00:16:04.983 4.978 - 5.001: 97.6600% ( 7) 00:16:04.983 5.001 - 5.025: 97.7357% ( 10) 00:16:04.983 5.025 - 5.049: 97.7584% ( 3) 00:16:04.983 5.049 - 5.073: 97.7887% ( 4) 00:16:04.983 5.073 - 5.096: 97.8114% ( 3) 00:16:04.983 5.096 - 5.120: 97.8266% ( 2) 00:16:04.983 5.120 - 5.144: 97.8569% ( 4) 00:16:04.983 5.144 - 5.167: 97.8796% ( 3) 00:16:04.983 5.167 - 5.191: 97.9023% ( 3) 00:16:04.983 5.191 - 5.215: 97.9175% ( 2) 00:16:04.983 5.239 - 5.262: 97.9250% ( 1) 00:16:04.983 5.262 - 5.286: 97.9402% ( 2) 00:16:04.983 5.286 - 5.310: 97.9553% ( 2) 00:16:04.983 5.333 - 5.357: 97.9629% ( 1) 00:16:04.983 5.357 - 5.381: 97.9705% ( 1) 00:16:04.983 5.381 - 5.404: 97.9780% ( 1) 00:16:04.983 5.428 - 5.452: 97.9932% ( 2) 00:16:04.983 5.452 - 5.476: 98.0008% ( 1) 00:16:04.983 5.476 - 5.499: 98.0083% ( 1) 00:16:04.983 5.618 - 5.641: 98.0310% ( 3) 
00:16:04.983 5.641 - 5.665: 98.0462% ( 2) 00:16:04.983 5.689 - 5.713: 98.0538% ( 1) 00:16:04.983 5.902 - 5.926: 98.0613% ( 1) 00:16:04.983 6.021 - 6.044: 98.0689% ( 1) 00:16:04.983 6.116 - 6.163: 98.0765% ( 1) 00:16:04.983 6.163 - 6.210: 98.0841% ( 1) 00:16:04.983 6.258 - 6.305: 98.0916% ( 1) 00:16:04.983 6.447 - 6.495: 98.1068% ( 2) 00:16:04.983 6.590 - 6.637: 98.1144% ( 1) 00:16:04.983 6.637 - 6.684: 98.1295% ( 2) 00:16:04.983 6.684 - 6.732: 98.1371% ( 1) 00:16:04.983 6.732 - 6.779: 98.1446% ( 1) 00:16:04.983 6.827 - 6.874: 98.1522% ( 1) 00:16:04.983 7.016 - 7.064: 98.1598% ( 1) 00:16:04.983 7.064 - 7.111: 98.1749% ( 2) 00:16:04.983 7.111 - 7.159: 98.1901% ( 2) 00:16:04.983 7.159 - 7.206: 98.1977% ( 1) 00:16:04.983 7.206 - 7.253: 98.2052% ( 1) 00:16:04.983 7.253 - 7.301: 98.2128% ( 1) 00:16:04.983 7.301 - 7.348: 98.2279% ( 2) 00:16:04.983 7.396 - 7.443: 98.2355% ( 1) 00:16:04.983 7.538 - 7.585: 98.2507% ( 2) 00:16:04.983 7.585 - 7.633: 98.2582% ( 1) 00:16:04.983 7.633 - 7.680: 98.2658% ( 1) 00:16:04.983 7.680 - 7.727: 98.2734% ( 1) 00:16:04.983 7.775 - 7.822: 98.2885% ( 2) 00:16:04.983 8.012 - 8.059: 98.3188% ( 4) 00:16:04.983 8.059 - 8.107: 98.3340% ( 2) 00:16:04.983 8.201 - 8.249: 98.3491% ( 2) 00:16:04.983 8.249 - 8.296: 98.3643% ( 2) 00:16:04.983 8.296 - 8.344: 98.3870% ( 3) 00:16:04.983 8.344 - 8.391: 98.4021% ( 2) 00:16:04.983 8.391 - 8.439: 98.4097% ( 1) 00:16:04.983 8.439 - 8.486: 98.4248% ( 2) 00:16:04.983 8.486 - 8.533: 98.4324% ( 1) 00:16:04.983 8.533 - 8.581: 98.4400% ( 1) 00:16:04.983 8.676 - 8.723: 98.4476% ( 1) 00:16:04.983 8.723 - 8.770: 98.4627% ( 2) 00:16:04.983 8.818 - 8.865: 98.4703% ( 1) 00:16:04.983 8.865 - 8.913: 98.4778% ( 1) 00:16:04.983 8.960 - 9.007: 98.4930% ( 2) 00:16:04.983 9.007 - 9.055: 98.5006% ( 1) 00:16:04.983 9.197 - 9.244: 98.5081% ( 1) 00:16:04.983 9.387 - 9.434: 98.5157% ( 1) 00:16:04.983 9.434 - 9.481: 98.5233% ( 1) 00:16:04.983 9.529 - 9.576: 98.5309% ( 1) 00:16:04.983 9.576 - 9.624: 98.5384% ( 1) 00:16:04.983 9.671 - 9.719: 98.5460% ( 1) 00:16:04.983 9.813 - 9.861: 98.5612% ( 2) 00:16:04.983 9.861 - 9.908: 98.5763% ( 2) 00:16:04.983 10.098 - 10.145: 98.5839% ( 1) 00:16:04.983 10.193 - 10.240: 98.5990% ( 2) 00:16:04.983 10.619 - 10.667: 98.6066% ( 1) 00:16:04.983 10.667 - 10.714: 98.6142% ( 1) 00:16:04.983 10.714 - 10.761: 98.6217% ( 1) 00:16:04.983 10.761 - 10.809: 98.6293% ( 1) 00:16:04.983 10.856 - 10.904: 98.6369% ( 1) 00:16:04.983 10.951 - 10.999: 98.6520% ( 2) 00:16:04.983 11.046 - 11.093: 98.6596% ( 1) 00:16:04.983 11.093 - 11.141: 98.6672% ( 1) 00:16:04.983 11.188 - 11.236: 98.6747% ( 1) 00:16:04.984 11.283 - 11.330: 98.6823% ( 1) 00:16:04.984 11.330 - 11.378: 98.6899% ( 1) 00:16:04.984 11.425 - 11.473: 98.6975% ( 1) 00:16:04.984 11.473 - 11.520: 98.7050% ( 1) 00:16:04.984 11.520 - 11.567: 98.7202% ( 2) 00:16:04.984 11.662 - 11.710: 98.7278% ( 1) 00:16:04.984 11.710 - 11.757: 98.7580% ( 4) 00:16:04.984 11.757 - 11.804: 98.7656% ( 1) 00:16:04.984 11.899 - 11.947: 98.7732% ( 1) 00:16:04.984 11.947 - 11.994: 98.7883% ( 2) 00:16:04.984 12.089 - 12.136: 98.7959% ( 1) 00:16:04.984 12.231 - 12.326: 98.8111% ( 2) 00:16:04.984 12.421 - 12.516: 98.8186% ( 1) 00:16:04.984 12.516 - 12.610: 98.8262% ( 1) 00:16:04.984 12.610 - 12.705: 98.8338% ( 1) 00:16:04.984 12.705 - 12.800: 98.8565% ( 3) 00:16:04.984 12.990 - 13.084: 98.8641% ( 1) 00:16:04.984 13.179 - 13.274: 98.8716% ( 1) 00:16:04.984 13.274 - 13.369: 98.8868% ( 2) 00:16:04.984 13.369 - 13.464: 98.8944% ( 1) 00:16:04.984 13.938 - 14.033: 98.9171% ( 3) 00:16:04.984 14.033 - 14.127: 98.9246% ( 1) 
00:16:04.984 14.222 - 14.317: 98.9322% ( 1) 00:16:04.984 14.507 - 14.601: 98.9398% ( 1) 00:16:04.984 14.601 - 14.696: 98.9474% ( 1) 00:16:04.984 14.696 - 14.791: 98.9549% ( 1) 00:16:04.984 14.791 - 14.886: 98.9701% ( 2) 00:16:04.984 14.886 - 14.981: 98.9777% ( 1) 00:16:04.984 15.265 - 15.360: 98.9928% ( 2) 00:16:04.984 15.360 - 15.455: 99.0004% ( 1) 00:16:04.984 15.550 - 15.644: 99.0080% ( 1) 00:16:04.984 17.067 - 17.161: 99.0155% ( 1) 00:16:04.984 17.256 - 17.351: 99.0231% ( 1) 00:16:04.984 17.351 - 17.446: 99.0382% ( 2) 00:16:04.984 17.446 - 17.541: 99.0685% ( 4) 00:16:04.984 17.541 - 17.636: 99.0988% ( 4) 00:16:04.984 17.636 - 17.730: 99.1064% ( 1) 00:16:04.984 17.730 - 17.825: 99.1367% ( 4) 00:16:04.984 17.825 - 17.920: 99.2048% ( 9) 00:16:04.984 17.920 - 18.015: 99.2806% ( 10) 00:16:04.984 18.015 - 18.110: 99.3336% ( 7) 00:16:04.984 18.110 - 18.204: 99.3942% ( 8) 00:16:04.984 18.204 - 18.299: 99.4623% ( 9) 00:16:04.984 18.299 - 18.394: 99.5229% ( 8) 00:16:04.984 18.394 - 18.489: 99.6062% ( 11) 00:16:04.984 18.489 - 18.584: 99.6441% ( 5) 00:16:04.984 18.584 - 18.679: 99.6819% ( 5) 00:16:04.984 18.679 - 18.773: 99.7274% ( 6) 00:16:04.984 18.773 - 18.868: 99.7728% ( 6) 00:16:04.984 18.868 - 18.963: 99.7955% ( 3) 00:16:04.984 19.058 - 19.153: 99.8107% ( 2) 00:16:04.984 19.153 - 19.247: 99.8183% ( 1) 00:16:04.984 19.247 - 19.342: 99.8258% ( 1) 00:16:04.984 19.342 - 19.437: 99.8334% ( 1) 00:16:04.984 19.437 - 19.532: 99.8410% ( 1) 00:16:04.984 20.859 - 20.954: 99.8485% ( 1) 00:16:04.984 21.428 - 21.523: 99.8561% ( 1) 00:16:04.984 22.092 - 22.187: 99.8637% ( 1) 00:16:04.984 23.514 - 23.609: 99.8713% ( 1) 00:16:04.984 30.341 - 30.530: 99.8788% ( 1) 00:16:04.984 3980.705 - 4004.978: 99.9773% ( 13) 00:16:04.984 4004.978 - 4029.250: 100.0000% ( 3) 00:16:04.984 00:16:04.984 Complete histogram 00:16:04.984 ================== 00:16:04.984 Range in us Cumulative Count 00:16:04.984 2.062 - 2.074: 0.0530% ( 7) 00:16:04.984 2.074 - 2.086: 20.8330% ( 2744) 00:16:04.984 2.086 - 2.098: 40.1515% ( 2551) 00:16:04.984 2.098 - 2.110: 42.4839% ( 308) 00:16:04.984 2.110 - 2.121: 55.8046% ( 1759) 00:16:04.984 2.121 - 2.133: 61.4161% ( 741) 00:16:04.984 2.133 - 2.145: 63.9379% ( 333) 00:16:04.984 2.145 - 2.157: 74.9640% ( 1456) 00:16:04.984 2.157 - 2.169: 79.4548% ( 593) 00:16:04.984 2.169 - 2.181: 81.1587% ( 225) 00:16:04.984 2.181 - 2.193: 86.3006% ( 679) 00:16:04.984 2.193 - 2.204: 88.3453% ( 270) 00:16:04.984 2.204 - 2.216: 89.1708% ( 109) 00:16:04.984 2.216 - 2.228: 90.5187% ( 178) 00:16:04.984 2.228 - 2.240: 92.0106% ( 197) 00:16:04.984 2.240 - 2.252: 93.5933% ( 209) 00:16:04.984 2.252 - 2.264: 94.3582% ( 101) 00:16:04.984 2.264 - 2.276: 94.6914% ( 44) 00:16:04.984 2.276 - 2.287: 94.8580% ( 22) 00:16:04.984 2.287 - 2.299: 94.9413% ( 11) 00:16:04.984 2.299 - 2.311: 95.1003% ( 21) 00:16:04.984 2.311 - 2.323: 95.4563% ( 47) 00:16:04.984 2.323 - 2.335: 95.5699% ( 15) 00:16:04.984 2.335 - 2.347: 95.6304% ( 8) 00:16:04.984 2.347 - 2.359: 95.7289% ( 13) 00:16:04.984 2.359 - 2.370: 95.9409% ( 28) 00:16:04.984 2.370 - 2.382: 96.1303% ( 25) 00:16:04.984 2.382 - 2.394: 96.4256% ( 39) 00:16:04.984 2.394 - 2.406: 96.9027% ( 63) 00:16:04.984 2.406 - 2.418: 97.1829% ( 37) 00:16:04.984 2.418 - 2.430: 97.4782% ( 39) 00:16:04.984 2.430 - 2.441: 97.7130% ( 31) 00:16:04.984 2.441 - 2.453: 97.8266% ( 15) 00:16:04.984 2.453 - 2.465: 97.8796% ( 7) 00:16:04.984 2.465 - 2.477: 98.0159% ( 18) 00:16:04.984 2.477 - 2.489: 98.0765% ( 8) 00:16:04.984 2.489 - 2.501: 98.0916% ( 2) 00:16:04.984 2.501 - 2.513: 98.1144% ( 3) 00:16:04.984 
2.524 - 2.536: 98.1295% ( 2) 00:16:04.984 2.536 - 2.548: 98.1446% ( 2) 00:16:04.984 2.548 - 2.560: 98.1598% ( 2) 00:16:04.984 2.560 - 2.572: 98.1749% ( 2) 00:16:04.984 2.584 - 2.596: 98.1825% ( 1) 00:16:04.984 2.596 - 2.607: 98.1977% ( 2) 00:16:04.984 2.607 - 2.619: 98.2052% ( 1) 00:16:04.984 2.643 - 2.655: 98.2204% ( 2) 00:16:04.984 2.655 - 2.667: 98.2279% ( 1) 00:16:04.984 2.667 - 2.679: 98.2431% ( 2) 00:16:04.984 2.679 - 2.690: 98.2582% ( 2) 00:16:04.984 2.690 - 2.702: 98.2734% ( 2) 00:16:04.984 2.702 - 2.714: 98.3037% ( 4) 00:16:04.984 2.726 - 2.738: 9[2024-07-11 11:02:19.362641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.984 8.3112% ( 1) 00:16:04.984 2.750 - 2.761: 98.3188% ( 1) 00:16:04.984 2.761 - 2.773: 98.3264% ( 1) 00:16:04.984 2.773 - 2.785: 98.3340% ( 1) 00:16:04.984 2.785 - 2.797: 98.3415% ( 1) 00:16:04.984 2.821 - 2.833: 98.3491% ( 1) 00:16:04.984 2.892 - 2.904: 98.3567% ( 1) 00:16:04.984 2.951 - 2.963: 98.3643% ( 1) 00:16:04.984 3.010 - 3.022: 98.3794% ( 2) 00:16:04.984 3.129 - 3.153: 98.3870% ( 1) 00:16:04.984 3.532 - 3.556: 98.4173% ( 4) 00:16:04.984 3.579 - 3.603: 98.4248% ( 1) 00:16:04.984 3.603 - 3.627: 98.4324% ( 1) 00:16:04.984 3.627 - 3.650: 98.4400% ( 1) 00:16:04.984 3.650 - 3.674: 98.4476% ( 1) 00:16:04.984 3.698 - 3.721: 98.4551% ( 1) 00:16:04.984 3.721 - 3.745: 98.4627% ( 1) 00:16:04.984 3.745 - 3.769: 98.4778% ( 2) 00:16:04.984 3.793 - 3.816: 98.4854% ( 1) 00:16:04.984 3.816 - 3.840: 98.5006% ( 2) 00:16:04.984 3.840 - 3.864: 98.5309% ( 4) 00:16:04.984 3.911 - 3.935: 98.5460% ( 2) 00:16:04.984 3.959 - 3.982: 98.5612% ( 2) 00:16:04.984 3.982 - 4.006: 98.5687% ( 1) 00:16:04.984 4.053 - 4.077: 98.5839% ( 2) 00:16:04.984 4.101 - 4.124: 98.5914% ( 1) 00:16:04.984 4.124 - 4.148: 98.5990% ( 1) 00:16:04.984 4.219 - 4.243: 98.6142% ( 2) 00:16:04.984 5.404 - 5.428: 98.6217% ( 1) 00:16:04.984 5.499 - 5.523: 98.6293% ( 1) 00:16:04.984 5.618 - 5.641: 98.6369% ( 1) 00:16:04.984 5.736 - 5.760: 98.6445% ( 1) 00:16:04.984 5.807 - 5.831: 98.6520% ( 1) 00:16:04.984 6.116 - 6.163: 98.6596% ( 1) 00:16:04.984 6.353 - 6.400: 98.6747% ( 2) 00:16:04.984 6.542 - 6.590: 98.6899% ( 2) 00:16:04.984 6.684 - 6.732: 98.6975% ( 1) 00:16:04.984 6.827 - 6.874: 98.7050% ( 1) 00:16:04.984 7.064 - 7.111: 98.7126% ( 1) 00:16:04.984 7.301 - 7.348: 98.7202% ( 1) 00:16:04.984 15.360 - 15.455: 98.7353% ( 2) 00:16:04.984 15.550 - 15.644: 98.7429% ( 1) 00:16:04.984 15.644 - 15.739: 98.7505% ( 1) 00:16:04.984 15.739 - 15.834: 98.7656% ( 2) 00:16:04.984 15.834 - 15.929: 98.7808% ( 2) 00:16:04.984 15.929 - 16.024: 98.8262% ( 6) 00:16:04.984 16.024 - 16.119: 98.8792% ( 7) 00:16:04.984 16.119 - 16.213: 98.9019% ( 3) 00:16:04.984 16.213 - 16.308: 98.9625% ( 8) 00:16:04.984 16.308 - 16.403: 99.0458% ( 11) 00:16:04.984 16.403 - 16.498: 99.0685% ( 3) 00:16:04.984 16.498 - 16.593: 99.1215% ( 7) 00:16:04.984 16.593 - 16.687: 99.1821% ( 8) 00:16:04.984 16.687 - 16.782: 99.2276% ( 6) 00:16:04.984 16.782 - 16.877: 99.2654% ( 5) 00:16:04.984 16.877 - 16.972: 99.2957% ( 4) 00:16:04.984 16.972 - 17.067: 99.3487% ( 7) 00:16:04.984 17.067 - 17.161: 99.3715% ( 3) 00:16:04.984 17.161 - 17.256: 99.3790% ( 1) 00:16:04.984 17.256 - 17.351: 99.3866% ( 1) 00:16:04.984 17.446 - 17.541: 99.4017% ( 2) 00:16:04.984 17.730 - 17.825: 99.4093% ( 1) 00:16:04.984 18.015 - 18.110: 99.4169% ( 1) 00:16:04.984 18.299 - 18.394: 99.4245% ( 1) 00:16:04.984 18.394 - 18.489: 99.4320% ( 1) 00:16:04.984 3980.705 - 4004.978: 99.9167% ( 64) 00:16:04.985 4004.978 - 4029.250: 100.0000% ( 
11) 00:16:04.985 00:16:04.985 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:04.985 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.985 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.985 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:04.985 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.245 [ 00:16:05.245 { 00:16:05.245 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.245 "subtype": "Discovery", 00:16:05.245 "listen_addresses": [], 00:16:05.245 "allow_any_host": true, 00:16:05.245 "hosts": [] 00:16:05.245 }, 00:16:05.245 { 00:16:05.245 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.245 "subtype": "NVMe", 00:16:05.245 "listen_addresses": [ 00:16:05.245 { 00:16:05.245 "trtype": "VFIOUSER", 00:16:05.245 "adrfam": "IPv4", 00:16:05.245 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.245 "trsvcid": "0" 00:16:05.245 } 00:16:05.245 ], 00:16:05.245 "allow_any_host": true, 00:16:05.245 "hosts": [], 00:16:05.245 "serial_number": "SPDK1", 00:16:05.245 "model_number": "SPDK bdev Controller", 00:16:05.245 "max_namespaces": 32, 00:16:05.245 "min_cntlid": 1, 00:16:05.245 "max_cntlid": 65519, 00:16:05.245 "namespaces": [ 00:16:05.245 { 00:16:05.245 "nsid": 1, 00:16:05.245 "bdev_name": "Malloc1", 00:16:05.245 "name": "Malloc1", 00:16:05.245 "nguid": "61AFA3CEDDE247BF96A8098800089327", 00:16:05.245 "uuid": "61afa3ce-dde2-47bf-96a8-098800089327" 00:16:05.245 }, 00:16:05.245 { 00:16:05.245 "nsid": 2, 00:16:05.245 "bdev_name": "Malloc3", 00:16:05.245 "name": "Malloc3", 00:16:05.245 "nguid": "75A65988C09940EDB8F29DEB46581380", 00:16:05.245 "uuid": "75a65988-c099-40ed-b8f2-9deb46581380" 00:16:05.245 } 00:16:05.245 ] 00:16:05.245 }, 00:16:05.245 { 00:16:05.245 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.245 "subtype": "NVMe", 00:16:05.245 "listen_addresses": [ 00:16:05.245 { 00:16:05.245 "trtype": "VFIOUSER", 00:16:05.245 "adrfam": "IPv4", 00:16:05.245 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.245 "trsvcid": "0" 00:16:05.245 } 00:16:05.245 ], 00:16:05.245 "allow_any_host": true, 00:16:05.245 "hosts": [], 00:16:05.245 "serial_number": "SPDK2", 00:16:05.245 "model_number": "SPDK bdev Controller", 00:16:05.245 "max_namespaces": 32, 00:16:05.245 "min_cntlid": 1, 00:16:05.245 "max_cntlid": 65519, 00:16:05.245 "namespaces": [ 00:16:05.245 { 00:16:05.245 "nsid": 1, 00:16:05.245 "bdev_name": "Malloc2", 00:16:05.245 "name": "Malloc2", 00:16:05.245 "nguid": "4B57C0983FD541009D0F447D8B92E1BB", 00:16:05.245 "uuid": "4b57c098-3fd5-4100-9d0f-447d8b92e1bb" 00:16:05.245 } 00:16:05.245 ] 00:16:05.245 } 00:16:05.245 ] 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=215651 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.504 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:05.504 [2024-07-11 11:02:19.840315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:05.504 11:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:05.762 Malloc4 00:16:05.762 11:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:06.020 [2024-07-11 11:02:20.398400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.020 11:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.020 Asynchronous Event Request test 00:16:06.020 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.020 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.020 Registering asynchronous event callbacks... 00:16:06.020 Starting namespace attribute notice tests for all controllers... 00:16:06.020 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:06.020 aer_cb - Changed Namespace 00:16:06.020 Cleaning up... 
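The AER check above is driven by a touch-file handshake: the aer tool is started with -t /tmp/aer_touch_file, the harness polls for that file (the waitforfile loop from autotest_common.sh traced above, up to 200 iterations of sleep 0.1), and only then attaches a second namespace so the tool can observe the Changed Namespace event. A minimal sketch of that sequence, assuming this workspace's SPDK checkout and a vfio-user target already serving nqn.2019-07.io.spdk:cnode2; the 200 x 0.1 s loop below is an illustrative stand-in for the traced helper, not the helper itself:

# Sketch only: reproduces the touch-file handshake traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TOUCH=/tmp/aer_touch_file
rm -f "$TOUCH"
"$SPDK/test/nvme/aer/aer" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -n 2 -g -t "$TOUCH" &
aerpid=$!
for i in $(seq 1 200); do        # aer touches the file once its callback is registered
  [ -e "$TOUCH" ] && break
  sleep 0.1
done
rm -f "$TOUCH"
# Adding a namespace to cnode2 raises the "Changed Namespace" notice logged above.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc4
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
wait "$aerpid"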
00:16:06.279 [ 00:16:06.279 { 00:16:06.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.279 "subtype": "Discovery", 00:16:06.279 "listen_addresses": [], 00:16:06.279 "allow_any_host": true, 00:16:06.279 "hosts": [] 00:16:06.279 }, 00:16:06.279 { 00:16:06.279 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.279 "subtype": "NVMe", 00:16:06.279 "listen_addresses": [ 00:16:06.279 { 00:16:06.279 "trtype": "VFIOUSER", 00:16:06.279 "adrfam": "IPv4", 00:16:06.279 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.279 "trsvcid": "0" 00:16:06.279 } 00:16:06.279 ], 00:16:06.279 "allow_any_host": true, 00:16:06.279 "hosts": [], 00:16:06.279 "serial_number": "SPDK1", 00:16:06.279 "model_number": "SPDK bdev Controller", 00:16:06.279 "max_namespaces": 32, 00:16:06.279 "min_cntlid": 1, 00:16:06.279 "max_cntlid": 65519, 00:16:06.279 "namespaces": [ 00:16:06.279 { 00:16:06.279 "nsid": 1, 00:16:06.279 "bdev_name": "Malloc1", 00:16:06.279 "name": "Malloc1", 00:16:06.279 "nguid": "61AFA3CEDDE247BF96A8098800089327", 00:16:06.279 "uuid": "61afa3ce-dde2-47bf-96a8-098800089327" 00:16:06.279 }, 00:16:06.279 { 00:16:06.279 "nsid": 2, 00:16:06.279 "bdev_name": "Malloc3", 00:16:06.279 "name": "Malloc3", 00:16:06.279 "nguid": "75A65988C09940EDB8F29DEB46581380", 00:16:06.279 "uuid": "75a65988-c099-40ed-b8f2-9deb46581380" 00:16:06.279 } 00:16:06.279 ] 00:16:06.279 }, 00:16:06.279 { 00:16:06.279 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.279 "subtype": "NVMe", 00:16:06.279 "listen_addresses": [ 00:16:06.279 { 00:16:06.279 "trtype": "VFIOUSER", 00:16:06.279 "adrfam": "IPv4", 00:16:06.279 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.279 "trsvcid": "0" 00:16:06.279 } 00:16:06.279 ], 00:16:06.279 "allow_any_host": true, 00:16:06.279 "hosts": [], 00:16:06.279 "serial_number": "SPDK2", 00:16:06.279 "model_number": "SPDK bdev Controller", 00:16:06.279 "max_namespaces": 32, 00:16:06.279 "min_cntlid": 1, 00:16:06.279 "max_cntlid": 65519, 00:16:06.279 "namespaces": [ 00:16:06.279 { 00:16:06.279 "nsid": 1, 00:16:06.279 "bdev_name": "Malloc2", 00:16:06.279 "name": "Malloc2", 00:16:06.279 "nguid": "4B57C0983FD541009D0F447D8B92E1BB", 00:16:06.279 "uuid": "4b57c098-3fd5-4100-9d0f-447d8b92e1bb" 00:16:06.279 }, 00:16:06.279 { 00:16:06.279 "nsid": 2, 00:16:06.279 "bdev_name": "Malloc4", 00:16:06.279 "name": "Malloc4", 00:16:06.279 "nguid": "E0D8D9F74A1744B181222488C7BEB336", 00:16:06.279 "uuid": "e0d8d9f7-4a17-44b1-8122-2488c7beb336" 00:16:06.279 } 00:16:06.279 ] 00:16:06.279 } 00:16:06.279 ] 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 215651 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 210056 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 210056 ']' 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 210056 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 210056 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 210056' 00:16:06.279 killing process with pid 210056 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 210056 00:16:06.279 11:02:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 210056 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=215792 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 215792' 00:16:06.849 Process pid: 215792 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 215792 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 215792 ']' 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.849 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.850 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.850 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.850 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 [2024-07-11 11:02:21.084006] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:06.850 [2024-07-11 11:02:21.085053] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:16:06.850 [2024-07-11 11:02:21.085125] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.850 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.850 [2024-07-11 11:02:21.142657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.850 [2024-07-11 11:02:21.236040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.850 [2024-07-11 11:02:21.236094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:06.850 [2024-07-11 11:02:21.236107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.850 [2024-07-11 11:02:21.236118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.850 [2024-07-11 11:02:21.236129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.850 [2024-07-11 11:02:21.236243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.850 [2024-07-11 11:02:21.236302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.850 [2024-07-11 11:02:21.236371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.850 [2024-07-11 11:02:21.236374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.110 [2024-07-11 11:02:21.340577] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:07.110 [2024-07-11 11:02:21.340807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:07.110 [2024-07-11 11:02:21.341081] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:07.110 [2024-07-11 11:02:21.341714] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:07.110 [2024-07-11 11:02:21.341976] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:07.110 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.110 11:02:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:07.110 11:02:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:08.046 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:08.304 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:08.304 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:08.304 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.304 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:08.304 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:08.562 Malloc1 00:16:08.562 11:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:08.821 11:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:09.079 11:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:09.336 11:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:09.336 11:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:09.336 11:02:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.594 Malloc2 00:16:09.852 11:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:10.110 11:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:10.110 11:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 215792 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 215792 ']' 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 215792 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.368 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 215792 00:16:10.626 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:10.626 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:10.626 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 215792' 00:16:10.626 killing process with pid 215792 00:16:10.626 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 215792 00:16:10.626 11:02:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 215792 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:10.885 00:16:10.885 real 0m52.967s 00:16:10.885 user 3m29.211s 00:16:10.885 sys 0m4.543s 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:10.885 ************************************ 00:16:10.885 END TEST nvmf_vfio_user 00:16:10.885 ************************************ 00:16:10.885 11:02:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:10.885 11:02:25 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.885 11:02:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:10.885 11:02:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.885 11:02:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.885 ************************************ 00:16:10.885 START TEST 
nvmf_vfio_user_nvme_compliance 00:16:10.885 ************************************ 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.885 * Looking for test storage... 00:16:10.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.885 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=216359 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 216359' 00:16:10.886 Process pid: 216359 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 216359 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 216359 ']' 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.886 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:10.886 [2024-07-11 11:02:25.247718] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:16:10.886 [2024-07-11 11:02:25.247833] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.886 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.886 [2024-07-11 11:02:25.305917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.144 [2024-07-11 11:02:25.390129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.144 [2024-07-11 11:02:25.390184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.144 [2024-07-11 11:02:25.390207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.144 [2024-07-11 11:02:25.390218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.144 [2024-07-11 11:02:25.390227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
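For the compliance run starting here, the harness launches a fresh nvmf_tgt (pid 216359) on core mask 0x7 and waits for its RPC socket before creating the VFIOUSER transport, subsystem, namespace, and listener traced further below. A condensed sketch of that bring-up under the same workspace paths; the socket-polling loop is a simplified stand-in for waitforlisten (socket path taken from the "Waiting for process..." message above), and plain kill replaces the harness's killprocess helper:

# Sketch only: compliance-target bring-up matching the trace around this point.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &     # 0x7 = cores 0-2, 0xFFFF = tracepoint mask
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the RPC socket
"$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
  -t VFIOUSER -a /var/run/vfio-user -s 0
"$SPDK/test/nvme/compliance/nvme_compliance" -g \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'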
00:16:11.144 [2024-07-11 11:02:25.390302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.144 [2024-07-11 11:02:25.390368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.144 [2024-07-11 11:02:25.390370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.144 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.144 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:11.144 11:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:12.081 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.341 malloc0 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.341 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.342 
11:02:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:12.342 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.342 00:16:12.342 00:16:12.342 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.342 http://cunit.sourceforge.net/ 00:16:12.342 00:16:12.342 00:16:12.342 Suite: nvme_compliance 00:16:12.342 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 11:02:26.739159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.342 [2024-07-11 11:02:26.740635] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:12.342 [2024-07-11 11:02:26.740661] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:12.342 [2024-07-11 11:02:26.740673] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:12.342 [2024-07-11 11:02:26.742177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.600 passed 00:16:12.600 Test: admin_identify_ctrlr_verify_fused ...[2024-07-11 11:02:26.829795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.600 [2024-07-11 11:02:26.832819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.600 passed 00:16:12.600 Test: admin_identify_ns ...[2024-07-11 11:02:26.919298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.600 [2024-07-11 11:02:26.978801] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:12.600 [2024-07-11 11:02:26.986773] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:12.600 [2024-07-11 11:02:27.007893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.858 passed 00:16:12.858 Test: admin_get_features_mandatory_features ...[2024-07-11 11:02:27.091273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.858 [2024-07-11 11:02:27.094296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.858 passed 00:16:12.858 Test: admin_get_features_optional_features ...[2024-07-11 11:02:27.177846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.858 [2024-07-11 11:02:27.181874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.858 passed 00:16:12.858 Test: admin_set_features_number_of_queues ...[2024-07-11 11:02:27.266478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.118 [2024-07-11 11:02:27.370890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.118 passed 00:16:13.118 Test: admin_get_log_page_mandatory_logs ...[2024-07-11 11:02:27.455945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.118 [2024-07-11 11:02:27.458968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.118 passed 00:16:13.118 Test: admin_get_log_page_with_lpo ...[2024-07-11 11:02:27.540144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.378 [2024-07-11 11:02:27.607770] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:13.378 [2024-07-11 11:02:27.623856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.378 passed 00:16:13.378 Test: fabric_property_get ...[2024-07-11 11:02:27.703468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.378 [2024-07-11 11:02:27.704739] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:13.378 [2024-07-11 11:02:27.706492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.378 passed 00:16:13.378 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 11:02:27.792055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.378 [2024-07-11 11:02:27.793325] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:13.378 [2024-07-11 11:02:27.795090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.639 passed 00:16:13.639 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 11:02:27.877317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.639 [2024-07-11 11:02:27.960778] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.639 [2024-07-11 11:02:27.976760] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.639 [2024-07-11 11:02:27.981866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.639 passed 00:16:13.899 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 11:02:28.064082] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.899 [2024-07-11 11:02:28.065340] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:13.899 [2024-07-11 11:02:28.067085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.899 passed 00:16:13.899 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 11:02:28.151393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.899 [2024-07-11 11:02:28.226764] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:13.899 [2024-07-11 11:02:28.250768] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.899 [2024-07-11 11:02:28.255867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.899 passed 00:16:14.158 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 11:02:28.338060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.158 [2024-07-11 11:02:28.339327] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.158 [2024-07-11 11:02:28.339366] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.158 [2024-07-11 11:02:28.341082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.158 passed 00:16:14.158 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 11:02:28.425322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.158 [2024-07-11 11:02:28.516783] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:14.158 [2024-07-11 11:02:28.524763] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:14.158 [2024-07-11 11:02:28.532778] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:14.158 [2024-07-11 11:02:28.540792] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:14.158 [2024-07-11 11:02:28.569875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.416 passed 00:16:14.416 Test: admin_create_io_sq_verify_pc ...[2024-07-11 11:02:28.653529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.416 [2024-07-11 11:02:28.668776] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:14.416 [2024-07-11 11:02:28.685869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.416 passed 00:16:14.416 Test: admin_create_io_qp_max_qps ...[2024-07-11 11:02:28.773460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.794 [2024-07-11 11:02:29.883771] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:16.051 [2024-07-11 11:02:30.260378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.051 passed 00:16:16.051 Test: admin_create_io_sq_shared_cq ...[2024-07-11 11:02:30.345971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.051 [2024-07-11 11:02:30.475764] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:16.308 [2024-07-11 11:02:30.512872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.308 passed 00:16:16.308 00:16:16.308 Run Summary: Type Total Ran Passed Failed Inactive 00:16:16.309 suites 1 1 n/a 0 0 00:16:16.309 tests 18 18 18 0 0 00:16:16.309 asserts 360 360 360 0 n/a 00:16:16.309 00:16:16.309 Elapsed time = 1.569 seconds 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 216359 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 216359 ']' 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 216359 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 216359 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 216359' 00:16:16.309 killing process with pid 216359 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 216359 00:16:16.309 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 216359 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:16.567 00:16:16.567 real 0m5.722s 00:16:16.567 user 0m16.116s 00:16:16.567 sys 0m0.543s 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:16.567 ************************************ 00:16:16.567 END TEST nvmf_vfio_user_nvme_compliance 00:16:16.567 ************************************ 00:16:16.567 11:02:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.567 11:02:30 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.567 11:02:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.567 11:02:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.567 11:02:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.567 ************************************ 00:16:16.567 START TEST nvmf_vfio_user_fuzz 00:16:16.567 ************************************ 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.567 * Looking for test storage... 00:16:16.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.567 11:02:30 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.567 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=217113 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 217113' 00:16:16.568 Process pid: 217113 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 217113 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 217113 ']' 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
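The fuzz target above is a stock nvmf_tgt driven over the local RPC socket; a minimal sketch of the same bring-up outside the test harness (paths relative to an spdk checkout; the default socket /var/tmp/spdk.sock is assumed):

    # launch the target with the same shm id, tracepoint mask and core mask seen above
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # retry until the socket exists, then framework_wait_init blocks until app init completes
    # (the harness's waitforlisten does the equivalent)
    until scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do sleep 0.5; done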
00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 
00:16:16.568 11:02:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:17.138 11:02:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:16:17.138 11:02:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 
00:16:17.138 11:02:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:18.074 malloc0 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:16:18.074 11:02:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:50.161 Fuzzing completed. 
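Condensed, the run just traced is five RPCs plus the fuzzer invocation; a sketch, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py on the default socket:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    # 64 MiB malloc bdev with 512-byte blocks; add_ns without -n takes the first free NSID
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # 30-second run (-t 30) with a fixed seed (-S 123456) against the vfio-user endpoint;
    # remaining flags copied verbatim from the trace above
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a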
Shutting down the fuzz application 00:16:50.161 00:16:50.161 Dumping successful admin opcodes: 00:16:50.161 8, 9, 10, 24, 00:16:50.161 Dumping successful io opcodes: 00:16:50.161 0, 00:16:50.161 NS: 0x200003a1ef00 I/O qp, Total commands completed: 643750, total successful commands: 2501, random_seed: 2526654400 00:16:50.161 NS: 0x200003a1ef00 admin qp, Total commands completed: 83022, total successful commands: 664, random_seed: 253666880 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 217113 ']' 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217113' 00:16:50.161 killing process with pid 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 217113 00:16:50.161 11:03:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:50.161 11:03:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:50.161 00:16:50.161 real 0m32.154s 00:16:50.161 user 0m29.685s 00:16:50.161 sys 0m29.169s 00:16:50.161 11:03:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.161 11:03:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.161 ************************************ 00:16:50.161 END TEST nvmf_vfio_user_fuzz 00:16:50.161 ************************************ 00:16:50.161 11:03:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.161 11:03:03 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.161 11:03:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.161 11:03:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.161 11:03:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.161 ************************************ 00:16:50.161 START 
TEST nvmf_host_management 00:16:50.161 ************************************ 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.161 * Looking for test storage... 00:16:50.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.161 11:03:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.162 11:03:03 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.162 11:03:03 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.162 11:03:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.101 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.101 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:16:51.102 00:16:51.102 --- 10.0.0.2 ping statistics --- 00:16:51.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.102 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:51.102 00:16:51.102 --- 10.0.0.1 ping statistics --- 00:16:51.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.102 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=222569 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 222569 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 222569 ']' 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:51.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.102 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.102 [2024-07-11 11:03:05.428362] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:16:51.102 [2024-07-11 11:03:05.428432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.102 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.102 [2024-07-11 11:03:05.493581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.362 [2024-07-11 11:03:05.578820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.363 [2024-07-11 11:03:05.578878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.363 [2024-07-11 11:03:05.578904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.363 [2024-07-11 11:03:05.578915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.363 [2024-07-11 11:03:05.578924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.363 [2024-07-11 11:03:05.579118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.363 [2024-07-11 11:03:05.579203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.363 [2024-07-11 11:03:05.579318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.363 [2024-07-11 11:03:05.579312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 [2024-07-11 11:03:05.736670] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 11:03:05 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.363 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.363 Malloc0 00:16:51.622 [2024-07-11 11:03:05.798484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=222713 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 222713 /var/tmp/bdevperf.sock 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 222713 ']' 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:16:51.622 { 
00:16:51.622   "params": { 
00:16:51.622     "name": "Nvme$subsystem", 
00:16:51.622     "trtype": "$TEST_TRANSPORT", 
00:16:51.622     "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:51.622     "adrfam": "ipv4", 
00:16:51.622     "trsvcid": "$NVMF_PORT", 
00:16:51.622     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:16:51.622     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:16:51.622     "hdgst": ${hdgst:-false}, 
00:16:51.622     "ddgst": ${ddgst:-false} 
00:16:51.622   }, 
00:16:51.622   "method": "bdev_nvme_attach_controller" 
00:16:51.622 } 
00:16:51.622 EOF 
00:16:51.622 )") 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 
00:16:51.622 11:03:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:16:51.622   "params": { 
00:16:51.622     "name": "Nvme0", 
00:16:51.622     "trtype": "tcp", 
00:16:51.622     "traddr": "10.0.0.2", 
00:16:51.622     "adrfam": "ipv4", 
00:16:51.622     "trsvcid": "4420", 
00:16:51.622     "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:16:51.622     "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:16:51.622     "hdgst": false, 
00:16:51.622     "ddgst": false 
00:16:51.622   }, 
00:16:51.622   "method": "bdev_nvme_attach_controller" 
00:16:51.622 }' 
00:16:51.622 [2024-07-11 11:03:05.877315] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:16:51.622 [2024-07-11 11:03:05.877409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222713 ] 
00:16:51.622 EAL: No free 2048 kB hugepages reported on node 1 
00:16:51.622 [2024-07-11 11:03:05.938148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 
00:16:51.622 [2024-07-11 11:03:06.024588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:16:52.188 Running I/O for 10 seconds... 
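For this run the heredoc resolves to the single bdev_nvme_attach_controller entry printed above; written to a file, the config bdevperf reads from /dev/fd/63 looks roughly like the following (the outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json emits, only the inner object is verbatim from the trace, and nvme0.json is an illustrative filename):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

    # equivalent launch with the config in a file instead of a fd, flags as traced above:
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json -q 64 -o 65536 -w verify -t 10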
00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:52.188 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.448 [2024-07-11 11:03:06.749635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047e20 is same with the state(5) to be set 00:16:52.448 [2024-07-11 11:03:06.749705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047e20 is same with the state(5) to be set 00:16:52.448 [2024-07-11 11:03:06.749722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047e20 is same with the state(5) to be set 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.448 [2024-07-11 11:03:06.758192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.448 [2024-07-11 11:03:06.758247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.758265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.448 [2024-07-11 11:03:06.758288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.758301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.448 [2024-07-11 11:03:06.758315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.758329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.448 [2024-07-11 11:03:06.758343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.758357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a000 is same with the state(5) to be set 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.448 11:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:52.448 [2024-07-11 11:03:06.763928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 
11:03:06.763956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.763998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.448 [2024-07-11 11:03:06.764258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.448 [2024-07-11 11:03:06.764274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.764997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.449 [2024-07-11 11:03:06.765537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.449 [2024-07-11 11:03:06.765553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.765973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.765990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.766005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.766023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.766038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.766069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.766084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.766100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.766114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.766131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.450 [2024-07-11 11:03:06.766146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.450 [2024-07-11 11:03:06.766236] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2104420 was disconnected and freed. reset controller. 00:16:52.450 [2024-07-11 11:03:06.767366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:52.450 task offset: 81920 on job bdev=Nvme0n1 fails 00:16:52.450 00:16:52.450 Latency(us) 00:16:52.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.450 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.450 Job: Nvme0n1 ended in about 0.40 seconds with error 00:16:52.450 Verification LBA range: start 0x0 length 0x400 00:16:52.450 Nvme0n1 : 0.40 1585.00 99.06 158.50 0.00 35656.32 2657.85 34369.99 00:16:52.450 =================================================================================================================== 00:16:52.450 Total : 1585.00 99.06 158.50 0.00 35656.32 2657.85 34369.99 00:16:52.450 [2024-07-11 11:03:06.769246] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:52.450 [2024-07-11 11:03:06.769276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210a000 (9): Bad file descriptor 00:16:52.450 [2024-07-11 11:03:06.862905] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
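What the abort storm above demonstrates: host_management.sh revokes the host's access to the subsystem while bdevperf has a full queue of 64 writes outstanding, so the target deletes the submission queue, every queued WRITE completes as ABORTED - SQ DELETION, and the initiator's bdev_nvme layer starts an automatic controller reset; access is then restored so the reset can reconnect ("Resetting controller successful"). The two RPCs driving it, exactly as traced (subsystem and host NQNs are this run's):

# Revoke access: the target drops the qpair, in-flight I/O is aborted,
# and bdev_nvme begins resetting the controller.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access so the reset's reconnect attempt succeeds.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1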
00:16:53.384 11:03:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 222713 00:16:53.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (222713) - No such process 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.385 { 00:16:53.385 "params": { 00:16:53.385 "name": "Nvme$subsystem", 00:16:53.385 "trtype": "$TEST_TRANSPORT", 00:16:53.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.385 "adrfam": "ipv4", 00:16:53.385 "trsvcid": "$NVMF_PORT", 00:16:53.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.385 "hdgst": ${hdgst:-false}, 00:16:53.385 "ddgst": ${ddgst:-false} 00:16:53.385 }, 00:16:53.385 "method": "bdev_nvme_attach_controller" 00:16:53.385 } 00:16:53.385 EOF 00:16:53.385 )") 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:53.385 11:03:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.385 "params": { 00:16:53.385 "name": "Nvme0", 00:16:53.385 "trtype": "tcp", 00:16:53.385 "traddr": "10.0.0.2", 00:16:53.385 "adrfam": "ipv4", 00:16:53.385 "trsvcid": "4420", 00:16:53.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:53.385 "hdgst": false, 00:16:53.385 "ddgst": false 00:16:53.385 }, 00:16:53.385 "method": "bdev_nvme_attach_controller" 00:16:53.385 }' 00:16:53.385 [2024-07-11 11:03:07.807986] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:16:53.385 [2024-07-11 11:03:07.808072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223247 ] 00:16:53.643 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.643 [2024-07-11 11:03:07.871168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.643 [2024-07-11 11:03:07.956905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.900 Running I/O for 1 seconds... 
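Each bdevperf run is only trusted once I/O is demonstrably flowing; the waitforio helper traced before the failover polls bdev_get_iostat over the bdevperf RPC socket until the read count clears a threshold. A standalone sketch with the loop bounds from the trace (the original's ret/break bookkeeping is folded into return codes here):

waitforio() {
  local sock=$1 bdev=$2 i=10 count
  while (( i-- > 0 )); do
    # num_read_ops keeps growing while the verify workload is active
    count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    [ "$count" -ge 100 ] && return 0
    sleep 0.25
  done
  return 1   # no meaningful I/O observed within ~2.5 s
}

waitforio /var/tmp/bdevperf.sock Nvme0n1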
00:16:55.275 00:16:55.275 Latency(us) 00:16:55.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:55.275 Verification LBA range: start 0x0 length 0x400 00:16:55.275 Nvme0n1 : 1.01 1644.34 102.77 0.00 0.00 38296.28 4587.52 33981.63 00:16:55.275 =================================================================================================================== 00:16:55.275 Total : 1644.34 102.77 0.00 0.00 38296.28 4587.52 33981.63 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.275 rmmod nvme_tcp 00:16:55.275 rmmod nvme_fabrics 00:16:55.275 rmmod nvme_keyring 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 222569 ']' 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 222569 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 222569 ']' 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 222569 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222569 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222569' 00:16:55.275 killing process with pid 222569 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 222569 00:16:55.275 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 222569 00:16:55.534 [2024-07-11 11:03:09.791103] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.534 11:03:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.448 11:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.448 11:03:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:57.448 00:16:57.448 real 0m8.763s 00:16:57.448 user 0m20.144s 00:16:57.448 sys 0m2.623s 00:16:57.448 11:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.448 11:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.448 ************************************ 00:16:57.448 END TEST nvmf_host_management 00:16:57.448 ************************************ 00:16:57.708 11:03:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.708 11:03:11 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:57.708 11:03:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.708 11:03:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.708 11:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.708 ************************************ 00:16:57.708 START TEST nvmf_lvol 00:16:57.708 ************************************ 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:57.708 * Looking for test storage... 
00:16:57.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.708 11:03:11 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.708 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.709 11:03:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.709 11:03:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.709 11:03:12 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.709 11:03:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.709 11:03:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.249 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.250 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.250 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:00.250 
11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:00.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:17:00.250 00:17:00.250 --- 10.0.0.2 ping statistics --- 00:17:00.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.250 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:00.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:00.250 00:17:00.250 --- 10.0.0.1 ping statistics --- 00:17:00.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.250 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=225573 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 225573 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 225573 ']' 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.250 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.250 [2024-07-11 11:03:14.237015] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:17:00.250 [2024-07-11 11:03:14.237115] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.250 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.250 [2024-07-11 11:03:14.299505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:00.250 [2024-07-11 11:03:14.382496] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.251 [2024-07-11 11:03:14.382567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
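The 10.0.0.1/10.0.0.2 connectivity verified by the pings above comes from nvmf_tcp_init, which splits the two detected e810 ports between the default namespace (initiator side) and a private one (target side) so a single host exercises real NICs. The traced commands, collected into one sketch (interface names are this run's cvl_0_0/cvl_0_1):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1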
00:17:00.251 [2024-07-11 11:03:14.382591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.251 [2024-07-11 11:03:14.382601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.251 [2024-07-11 11:03:14.382611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.251 [2024-07-11 11:03:14.382769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.251 [2024-07-11 11:03:14.382863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.251 [2024-07-11 11:03:14.382866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.251 11:03:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.509 [2024-07-11 11:03:14.739080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.509 11:03:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.767 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:00.767 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.026 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:01.026 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:01.284 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:01.543 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1b71e914-ff89-411f-84d7-3566883daed6 00:17:01.543 11:03:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b71e914-ff89-411f-84d7-3566883daed6 lvol 20 00:17:01.801 11:03:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8dd87962-108f-4330-90d7-6b1839131648 00:17:01.801 11:03:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:02.060 11:03:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8dd87962-108f-4330-90d7-6b1839131648 00:17:02.319 11:03:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
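Collapsed into one place, the rpc.py calls above build the whole stack under test: two 64 MiB malloc bdevs striped into a RAID-0, an lvstore on the RAID, a 20 MiB lvol carved from it, and the lvol exported through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. A condensed sketch, assuming $rpc points at scripts/rpc.py talking to the default socket:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                    # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, prints its UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420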
00:17:02.577 [2024-07-11 11:03:16.793239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.578 11:03:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.836 11:03:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=225998 00:17:02.836 11:03:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:02.836 11:03:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:02.836 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.775 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8dd87962-108f-4330-90d7-6b1839131648 MY_SNAPSHOT 00:17:04.034 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f0ec463-a2fd-4087-8505-7b3262376c9c 00:17:04.034 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8dd87962-108f-4330-90d7-6b1839131648 30 00:17:04.292 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f0ec463-a2fd-4087-8505-7b3262376c9c MY_CLONE 00:17:04.551 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ab91e00c-0b33-41d0-8b2b-21d706bc4c37 00:17:04.551 11:03:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ab91e00c-0b33-41d0-8b2b-21d706bc4c37 00:17:05.122 11:03:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 225998 00:17:13.288 Initializing NVMe Controllers 00:17:13.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:13.288 Controller IO queue size 128, less than required. 00:17:13.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:13.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:13.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:13.288 Initialization complete. Launching workers. 
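With spdk_nvme_perf issuing 4 KiB random writes against the namespace for 10 seconds, the test walks the thin-provisioning lifecycle on the live lvol: snapshot, resize, clone the snapshot, inflate the clone. Roughly, reusing the $rpc/$lvol shorthand from the previous sketch ($perf_pid mirrors the test's perf_pid variable):

  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the current contents
  $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                       # clone takes ownership of all clusters
  wait "$perf_pid"                                      # let the 10 s perf run drain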
00:17:13.288 ======================================================== 00:17:13.288 Latency(us) 00:17:13.288 Device Information : IOPS MiB/s Average min max 00:17:13.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10780.07 42.11 11877.03 1761.17 80586.89 00:17:13.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10651.07 41.61 12027.03 1866.45 64605.44 00:17:13.288 ======================================================== 00:17:13.288 Total : 21431.14 83.72 11951.58 1761.17 80586.89 00:17:13.288 00:17:13.288 11:03:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:13.288 11:03:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8dd87962-108f-4330-90d7-6b1839131648 00:17:13.545 11:03:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b71e914-ff89-411f-84d7-3566883daed6 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.115 rmmod nvme_tcp 00:17:14.115 rmmod nvme_fabrics 00:17:14.115 rmmod nvme_keyring 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 225573 ']' 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 225573 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 225573 ']' 00:17:14.115 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 225573 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 225573 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 225573' 00:17:14.116 killing process with pid 225573 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 225573 00:17:14.116 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 225573 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.377 11:03:28 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.377 11:03:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.284 11:03:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.284 00:17:16.284 real 0m18.722s 00:17:16.284 user 1m4.171s 00:17:16.284 sys 0m5.477s 00:17:16.284 11:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.284 11:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:16.284 ************************************ 00:17:16.284 END TEST nvmf_lvol 00:17:16.284 ************************************ 00:17:16.284 11:03:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.284 11:03:30 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:16.284 11:03:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.284 11:03:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.284 11:03:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.284 ************************************ 00:17:16.284 START TEST nvmf_lvs_grow 00:17:16.284 ************************************ 00:17:16.284 11:03:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:16.543 * Looking for test storage... 
00:17:16.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.543 11:03:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.544 11:03:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.453 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:18.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:18.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:17:18.454 00:17:18.454 --- 10.0.0.2 ping statistics --- 00:17:18.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.454 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:17:18.454 00:17:18.454 --- 10.0.0.1 ping statistics --- 00:17:18.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.454 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=229243 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 229243 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 229243 ']' 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.454 11:03:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.716 [2024-07-11 11:03:32.884264] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:17:18.716 [2024-07-11 11:03:32.884367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.716 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.716 [2024-07-11 11:03:32.949336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.716 [2024-07-11 11:03:33.038354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.716 [2024-07-11 11:03:33.038416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
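The device-discovery pass repeated above is worth unpacking: common.sh keeps a cache of candidate NICs keyed by PCI vendor:device ID (0x8086:0x159b is the Intel E810 found here), then resolves each matching PCI function to its kernel net device through sysfs. The core of that resolution is just:

  # Sysfs maps each PCI function to the net device(s) it backs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do          # the two E810 functions on this rig
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done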
00:17:18.716 [2024-07-11 11:03:33.038444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.716 [2024-07-11 11:03:33.038455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.716 [2024-07-11 11:03:33.038465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.716 [2024-07-11 11:03:33.038492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.975 11:03:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:19.235 [2024-07-11 11:03:33.447532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.235 ************************************ 00:17:19.235 START TEST lvs_grow_clean 00:17:19.235 ************************************ 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.235 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:19.493 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:19.493 11:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:19.751 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:19.751 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:19.751 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:20.011 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:20.011 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:20.011 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df3391da-ec53-4db9-a866-e0d6c2c07862 lvol 150 00:17:20.272 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=694b27e4-1629-408a-b316-23ada104e09a 00:17:20.272 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.272 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:20.533 [2024-07-11 11:03:34.746900] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:20.533 [2024-07-11 11:03:34.746981] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:20.533 true 00:17:20.533 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:20.533 11:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:20.793 11:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:20.793 11:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:21.053 11:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 694b27e4-1629-408a-b316-23ada104e09a 00:17:21.312 11:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:21.570 [2024-07-11 11:03:35.782050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.570 11:03:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=229683 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 229683 /var/tmp/bdevperf.sock 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 229683 ']' 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.829 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.829 [2024-07-11 11:03:36.139521] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
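Before bdevperf attaches, lvs_grow_clean has already staged its scenario: a 200 MiB file-backed AIO bdev carrying an lvstore with 4 MiB clusters (49 usable data clusters), a 150 MiB lvol on top, then the backing file truncated up to 400 MiB and the AIO bdev rescanned (51200 -> 102400 blocks). The lvstore still reports 49 clusters at that point; growing it under I/O is the step being tested. Condensed, with an illustrative /tmp path standing in for test/nvmf/target/aio_bdev:

  truncate -s 200M /tmp/aio_bdev
  $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB lvol
  truncate -s 400M /tmp/aio_bdev                           # grow the file behind SPDK's back
  $rpc bdev_aio_rescan aio_bdev                            # bdev now sees 102400 blocks
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  # bdev_lvol_grow_lvstore -u "$lvs", issued later while bdevperf runs, doubles this to 99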
00:17:21.829 [2024-07-11 11:03:36.139609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229683 ] 00:17:21.829 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.829 [2024-07-11 11:03:36.196650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.088 [2024-07-11 11:03:36.281693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.088 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.088 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:22.088 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:22.346 Nvme0n1 00:17:22.346 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:22.606 [ 00:17:22.606 { 00:17:22.606 "name": "Nvme0n1", 00:17:22.606 "aliases": [ 00:17:22.606 "694b27e4-1629-408a-b316-23ada104e09a" 00:17:22.606 ], 00:17:22.606 "product_name": "NVMe disk", 00:17:22.606 "block_size": 4096, 00:17:22.606 "num_blocks": 38912, 00:17:22.606 "uuid": "694b27e4-1629-408a-b316-23ada104e09a", 00:17:22.606 "assigned_rate_limits": { 00:17:22.606 "rw_ios_per_sec": 0, 00:17:22.606 "rw_mbytes_per_sec": 0, 00:17:22.606 "r_mbytes_per_sec": 0, 00:17:22.606 "w_mbytes_per_sec": 0 00:17:22.606 }, 00:17:22.606 "claimed": false, 00:17:22.606 "zoned": false, 00:17:22.606 "supported_io_types": { 00:17:22.606 "read": true, 00:17:22.606 "write": true, 00:17:22.606 "unmap": true, 00:17:22.606 "flush": true, 00:17:22.606 "reset": true, 00:17:22.606 "nvme_admin": true, 00:17:22.606 "nvme_io": true, 00:17:22.606 "nvme_io_md": false, 00:17:22.606 "write_zeroes": true, 00:17:22.606 "zcopy": false, 00:17:22.606 "get_zone_info": false, 00:17:22.606 "zone_management": false, 00:17:22.606 "zone_append": false, 00:17:22.606 "compare": true, 00:17:22.606 "compare_and_write": true, 00:17:22.606 "abort": true, 00:17:22.606 "seek_hole": false, 00:17:22.606 "seek_data": false, 00:17:22.606 "copy": true, 00:17:22.606 "nvme_iov_md": false 00:17:22.606 }, 00:17:22.606 "memory_domains": [ 00:17:22.606 { 00:17:22.606 "dma_device_id": "system", 00:17:22.606 "dma_device_type": 1 00:17:22.606 } 00:17:22.606 ], 00:17:22.606 "driver_specific": { 00:17:22.606 "nvme": [ 00:17:22.606 { 00:17:22.606 "trid": { 00:17:22.606 "trtype": "TCP", 00:17:22.606 "adrfam": "IPv4", 00:17:22.606 "traddr": "10.0.0.2", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:22.606 }, 00:17:22.606 "ctrlr_data": { 00:17:22.606 "cntlid": 1, 00:17:22.606 "vendor_id": "0x8086", 00:17:22.606 "model_number": "SPDK bdev Controller", 00:17:22.606 "serial_number": "SPDK0", 00:17:22.606 "firmware_revision": "24.09", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.606 "oacs": { 00:17:22.606 "security": 0, 00:17:22.606 "format": 0, 00:17:22.606 "firmware": 0, 00:17:22.606 "ns_manage": 0 00:17:22.606 }, 00:17:22.606 "multi_ctrlr": true, 00:17:22.606 "ana_reporting": false 00:17:22.606 }, 
00:17:22.606 "vs": { 00:17:22.606 "nvme_version": "1.3" 00:17:22.606 }, 00:17:22.606 "ns_data": { 00:17:22.606 "id": 1, 00:17:22.606 "can_share": true 00:17:22.606 } 00:17:22.606 } 00:17:22.606 ], 00:17:22.606 "mp_policy": "active_passive" 00:17:22.606 } 00:17:22.606 } 00:17:22.606 ] 00:17:22.606 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=229700 00:17:22.606 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:22.606 11:03:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.866 Running I/O for 10 seconds... 00:17:23.807 Latency(us) 00:17:23.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.807 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:23.807 =================================================================================================================== 00:17:23.807 Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:23.807 00:17:24.749 11:03:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:24.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.749 Nvme0n1 : 2.00 15312.50 59.81 0.00 0.00 0.00 0.00 0.00 00:17:24.749 =================================================================================================================== 00:17:24.749 Total : 15312.50 59.81 0.00 0.00 0.00 0.00 0.00 00:17:24.749 00:17:25.008 true 00:17:25.008 11:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:25.008 11:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.266 11:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:25.266 11:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.266 11:03:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 229700 00:17:25.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.837 Nvme0n1 : 3.00 15437.00 60.30 0.00 0.00 0.00 0.00 0.00 00:17:25.837 =================================================================================================================== 00:17:25.837 Total : 15437.00 60.30 0.00 0.00 0.00 0.00 0.00 00:17:25.837 00:17:26.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.778 Nvme0n1 : 4.00 15517.00 60.61 0.00 0.00 0.00 0.00 0.00 00:17:26.778 =================================================================================================================== 00:17:26.778 Total : 15517.00 60.61 0.00 0.00 0.00 0.00 0.00 00:17:26.778 00:17:27.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.717 Nvme0n1 : 5.00 15588.60 60.89 0.00 0.00 0.00 0.00 0.00 00:17:27.717 =================================================================================================================== 00:17:27.717 
Total : 15588.60 60.89 0.00 0.00 0.00 0.00 0.00 00:17:27.717 00:17:29.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.099 Nvme0n1 : 6.00 15636.33 61.08 0.00 0.00 0.00 0.00 0.00 00:17:29.099 =================================================================================================================== 00:17:29.099 Total : 15636.33 61.08 0.00 0.00 0.00 0.00 0.00 00:17:29.099 00:17:29.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.677 Nvme0n1 : 7.00 15688.57 61.28 0.00 0.00 0.00 0.00 0.00 00:17:29.677 =================================================================================================================== 00:17:29.677 Total : 15688.57 61.28 0.00 0.00 0.00 0.00 0.00 00:17:29.677 00:17:31.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.055 Nvme0n1 : 8.00 15727.75 61.44 0.00 0.00 0.00 0.00 0.00 00:17:31.055 =================================================================================================================== 00:17:31.055 Total : 15727.75 61.44 0.00 0.00 0.00 0.00 0.00 00:17:31.055 00:17:31.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.995 Nvme0n1 : 9.00 15744.11 61.50 0.00 0.00 0.00 0.00 0.00 00:17:31.995 =================================================================================================================== 00:17:31.995 Total : 15744.11 61.50 0.00 0.00 0.00 0.00 0.00 00:17:31.995 00:17:32.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.932 Nvme0n1 : 10.00 15769.90 61.60 0.00 0.00 0.00 0.00 0.00 00:17:32.932 =================================================================================================================== 00:17:32.932 Total : 15769.90 61.60 0.00 0.00 0.00 0.00 0.00 00:17:32.932 00:17:32.932 00:17:32.932 Latency(us) 00:17:32.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.932 Nvme0n1 : 10.01 15770.35 61.60 0.00 0.00 8111.79 3859.34 16019.91 00:17:32.932 =================================================================================================================== 00:17:32.932 Total : 15770.35 61.60 0.00 0.00 8111.79 3859.34 16019.91 00:17:32.932 0 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 229683 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 229683 ']' 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 229683 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 229683 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.932 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 229683' 00:17:32.932 killing process with pid 229683 00:17:32.932 11:03:47 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 229683 00:17:32.932 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.932 00:17:32.932 Latency(us) 00:17:32.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.933 =================================================================================================================== 00:17:32.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.933 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 229683 00:17:33.190 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.448 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:33.798 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:33.798 11:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:33.798 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:33.798 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:33.798 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:34.085 [2024-07-11 11:03:48.431502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:34.085 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:34.364 request: 00:17:34.364 { 00:17:34.364 "uuid": "df3391da-ec53-4db9-a866-e0d6c2c07862", 00:17:34.364 "method": "bdev_lvol_get_lvstores", 00:17:34.364 "req_id": 1 00:17:34.364 } 00:17:34.364 Got JSON-RPC error response 00:17:34.364 response: 00:17:34.364 { 00:17:34.364 "code": -19, 00:17:34.364 "message": "No such device" 00:17:34.364 } 00:17:34.364 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:34.364 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.364 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.364 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.364 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.635 aio_bdev 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 694b27e4-1629-408a-b316-23ada104e09a 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=694b27e4-1629-408a-b316-23ada104e09a 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.635 11:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:34.908 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 694b27e4-1629-408a-b316-23ada104e09a -t 2000 00:17:35.200 [ 00:17:35.200 { 00:17:35.200 "name": "694b27e4-1629-408a-b316-23ada104e09a", 00:17:35.200 "aliases": [ 00:17:35.200 "lvs/lvol" 00:17:35.200 ], 00:17:35.200 "product_name": "Logical Volume", 00:17:35.200 "block_size": 4096, 00:17:35.200 "num_blocks": 38912, 00:17:35.200 "uuid": "694b27e4-1629-408a-b316-23ada104e09a", 00:17:35.200 "assigned_rate_limits": { 00:17:35.200 "rw_ios_per_sec": 0, 00:17:35.200 "rw_mbytes_per_sec": 0, 00:17:35.200 "r_mbytes_per_sec": 0, 00:17:35.200 "w_mbytes_per_sec": 0 00:17:35.200 }, 00:17:35.200 "claimed": false, 00:17:35.200 "zoned": false, 00:17:35.200 "supported_io_types": { 00:17:35.200 "read": true, 00:17:35.200 "write": true, 00:17:35.200 "unmap": true, 00:17:35.200 "flush": false, 00:17:35.200 "reset": true, 00:17:35.200 "nvme_admin": false, 00:17:35.200 "nvme_io": false, 00:17:35.200 
"nvme_io_md": false, 00:17:35.200 "write_zeroes": true, 00:17:35.200 "zcopy": false, 00:17:35.200 "get_zone_info": false, 00:17:35.200 "zone_management": false, 00:17:35.200 "zone_append": false, 00:17:35.200 "compare": false, 00:17:35.200 "compare_and_write": false, 00:17:35.200 "abort": false, 00:17:35.200 "seek_hole": true, 00:17:35.200 "seek_data": true, 00:17:35.200 "copy": false, 00:17:35.200 "nvme_iov_md": false 00:17:35.200 }, 00:17:35.200 "driver_specific": { 00:17:35.200 "lvol": { 00:17:35.200 "lvol_store_uuid": "df3391da-ec53-4db9-a866-e0d6c2c07862", 00:17:35.200 "base_bdev": "aio_bdev", 00:17:35.200 "thin_provision": false, 00:17:35.200 "num_allocated_clusters": 38, 00:17:35.200 "snapshot": false, 00:17:35.200 "clone": false, 00:17:35.200 "esnap_clone": false 00:17:35.200 } 00:17:35.200 } 00:17:35.200 } 00:17:35.200 ] 00:17:35.200 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:35.200 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:35.200 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:35.472 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:35.472 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:35.472 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:35.760 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:35.760 11:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 694b27e4-1629-408a-b316-23ada104e09a 00:17:36.031 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df3391da-ec53-4db9-a866-e0d6c2c07862 00:17:36.304 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.591 00:17:36.591 real 0m17.285s 00:17:36.591 user 0m16.715s 00:17:36.591 sys 0m1.944s 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.591 ************************************ 00:17:36.591 END TEST lvs_grow_clean 00:17:36.591 ************************************ 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.591 ************************************ 00:17:36.591 START TEST lvs_grow_dirty 00:17:36.591 ************************************ 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.591 11:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.857 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:36.857 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:37.115 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:37.115 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:37.115 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:37.374 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:37.374 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:37.374 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2d2e994-a036-46f5-831a-7c4723bb9ace lvol 150 00:17:37.633 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:37.633 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.634 11:03:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:37.894 
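The setup traced above is the heart of the grow test: a 200M file-backed AIO bdev is created with a 4096-byte block size, an lvstore with 4 MiB clusters is laid on top of it (49 data clusters once metadata is reserved), a 150M lvol is carved out, and the backing file is then truncated to 400M so the rescan can report the larger size. As a minimal standalone sketch of the same sequence, assuming a running SPDK target and an illustrative /tmp/aio_file in place of the repository path:

# grow a file-backed lvstore (sketch; /tmp/aio_file is illustrative)
truncate -s 200M /tmp/aio_file
scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150   # 150 MiB volume
truncate -s 400M /tmp/aio_file                       # grow the backing file
scripts/rpc.py bdev_aio_rescan aio_bdev              # pick up the new block count
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # claim the new clusters

The sketch compresses the flow into one place; in the test itself the grow RPC is issued later, while bdevperf I/O is in flight.
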
[2024-07-11 11:03:52.107875] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:37.894 [2024-07-11 11:03:52.107977] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:37.894 true 00:17:37.894 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:37.894 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:38.154 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:38.154 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:38.412 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:38.670 11:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:38.670 [2024-07-11 11:03:53.086907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=231743 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 231743 /var/tmp/bdevperf.sock 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 231743 ']' 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
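bdevperf is launched with -z, so it parks after initialization and waits on its own RPC socket instead of starting the workload immediately; the test then attaches the exported namespace as bdev Nvme0n1 over that socket and releases the run with perform_tests while the lvstore is grown underneath it. Reduced to its moving parts, with paths relative to the SPDK tree:

# drive I/O over NVMe/TCP with bdevperf while the lvstore is resized (sketch)
sock=/var/tmp/bdevperf.sock
build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests  # unblocks the -z wait
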
00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.928 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:39.188 [2024-07-11 11:03:53.382592] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:17:39.188 [2024-07-11 11:03:53.382675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231743 ] 00:17:39.188 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.188 [2024-07-11 11:03:53.438922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.188 [2024-07-11 11:03:53.522662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.447 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.447 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:39.447 11:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:39.704 Nvme0n1 00:17:39.704 11:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:39.962 [ 00:17:39.962 { 00:17:39.962 "name": "Nvme0n1", 00:17:39.962 "aliases": [ 00:17:39.962 "0464a0fd-5291-4593-90af-8d7fc5120f1e" 00:17:39.962 ], 00:17:39.962 "product_name": "NVMe disk", 00:17:39.962 "block_size": 4096, 00:17:39.962 "num_blocks": 38912, 00:17:39.962 "uuid": "0464a0fd-5291-4593-90af-8d7fc5120f1e", 00:17:39.962 "assigned_rate_limits": { 00:17:39.962 "rw_ios_per_sec": 0, 00:17:39.962 "rw_mbytes_per_sec": 0, 00:17:39.962 "r_mbytes_per_sec": 0, 00:17:39.962 "w_mbytes_per_sec": 0 00:17:39.962 }, 00:17:39.962 "claimed": false, 00:17:39.962 "zoned": false, 00:17:39.962 "supported_io_types": { 00:17:39.962 "read": true, 00:17:39.962 "write": true, 00:17:39.962 "unmap": true, 00:17:39.962 "flush": true, 00:17:39.962 "reset": true, 00:17:39.962 "nvme_admin": true, 00:17:39.962 "nvme_io": true, 00:17:39.962 "nvme_io_md": false, 00:17:39.962 "write_zeroes": true, 00:17:39.962 "zcopy": false, 00:17:39.962 "get_zone_info": false, 00:17:39.962 "zone_management": false, 00:17:39.962 "zone_append": false, 00:17:39.962 "compare": true, 00:17:39.962 "compare_and_write": true, 00:17:39.962 "abort": true, 00:17:39.962 "seek_hole": false, 00:17:39.962 "seek_data": false, 00:17:39.962 "copy": true, 00:17:39.962 "nvme_iov_md": false 00:17:39.962 }, 00:17:39.962 "memory_domains": [ 00:17:39.962 { 00:17:39.962 "dma_device_id": "system", 00:17:39.962 "dma_device_type": 1 00:17:39.962 } 00:17:39.962 ], 00:17:39.962 "driver_specific": { 00:17:39.962 "nvme": [ 00:17:39.962 { 00:17:39.962 "trid": { 00:17:39.962 "trtype": "TCP", 00:17:39.962 "adrfam": "IPv4", 00:17:39.962 "traddr": "10.0.0.2", 00:17:39.962 "trsvcid": "4420", 00:17:39.962 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:39.962 }, 00:17:39.962 "ctrlr_data": { 00:17:39.962 "cntlid": 1, 00:17:39.962 "vendor_id": "0x8086", 00:17:39.962 "model_number": "SPDK bdev Controller", 00:17:39.962 "serial_number": "SPDK0", 
00:17:39.962 "firmware_revision": "24.09", 00:17:39.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:39.962 "oacs": { 00:17:39.962 "security": 0, 00:17:39.962 "format": 0, 00:17:39.962 "firmware": 0, 00:17:39.962 "ns_manage": 0 00:17:39.962 }, 00:17:39.962 "multi_ctrlr": true, 00:17:39.962 "ana_reporting": false 00:17:39.962 }, 00:17:39.962 "vs": { 00:17:39.962 "nvme_version": "1.3" 00:17:39.962 }, 00:17:39.962 "ns_data": { 00:17:39.962 "id": 1, 00:17:39.962 "can_share": true 00:17:39.962 } 00:17:39.962 } 00:17:39.962 ], 00:17:39.962 "mp_policy": "active_passive" 00:17:39.962 } 00:17:39.962 } 00:17:39.962 ] 00:17:39.962 11:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=231878 00:17:39.962 11:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:39.962 11:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:40.221 Running I/O for 10 seconds... 00:17:41.162 Latency(us) 00:17:41.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.162 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:41.162 =================================================================================================================== 00:17:41.162 Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:41.162 00:17:42.102 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:42.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.102 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:17:42.102 =================================================================================================================== 00:17:42.102 Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:17:42.102 00:17:42.361 true 00:17:42.361 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:42.361 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:42.619 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:42.619 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:42.619 11:03:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 231878 00:17:43.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.189 Nvme0n1 : 3.00 15431.33 60.28 0.00 0.00 0.00 0.00 0.00 00:17:43.189 =================================================================================================================== 00:17:43.189 Total : 15431.33 60.28 0.00 0.00 0.00 0.00 0.00 00:17:43.189 00:17:44.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.131 Nvme0n1 : 4.00 15503.50 60.56 0.00 0.00 0.00 0.00 0.00 00:17:44.131 =================================================================================================================== 00:17:44.131 Total : 15503.50 60.56 0.00 0.00 
0.00 0.00 0.00 00:17:44.131 00:17:45.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.074 Nvme0n1 : 5.00 15578.20 60.85 0.00 0.00 0.00 0.00 0.00 00:17:45.074 =================================================================================================================== 00:17:45.074 Total : 15578.20 60.85 0.00 0.00 0.00 0.00 0.00 00:17:45.074 00:17:46.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.458 Nvme0n1 : 6.00 15627.83 61.05 0.00 0.00 0.00 0.00 0.00 00:17:46.458 =================================================================================================================== 00:17:46.458 Total : 15627.83 61.05 0.00 0.00 0.00 0.00 0.00 00:17:46.458 00:17:47.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.401 Nvme0n1 : 7.00 15663.14 61.18 0.00 0.00 0.00 0.00 0.00 00:17:47.401 =================================================================================================================== 00:17:47.401 Total : 15663.14 61.18 0.00 0.00 0.00 0.00 0.00 00:17:47.401 00:17:48.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.343 Nvme0n1 : 8.00 15713.62 61.38 0.00 0.00 0.00 0.00 0.00 00:17:48.343 =================================================================================================================== 00:17:48.343 Total : 15713.62 61.38 0.00 0.00 0.00 0.00 0.00 00:17:48.343 00:17:49.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.287 Nvme0n1 : 9.00 15738.78 61.48 0.00 0.00 0.00 0.00 0.00 00:17:49.287 =================================================================================================================== 00:17:49.287 Total : 15738.78 61.48 0.00 0.00 0.00 0.00 0.00 00:17:49.287 00:17:50.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.227 Nvme0n1 : 10.00 15765.40 61.58 0.00 0.00 0.00 0.00 0.00 00:17:50.227 =================================================================================================================== 00:17:50.227 Total : 15765.40 61.58 0.00 0.00 0.00 0.00 0.00 00:17:50.227 00:17:50.227 00:17:50.227 Latency(us) 00:17:50.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.227 Nvme0n1 : 10.00 15772.53 61.61 0.00 0.00 8110.87 4320.52 17670.45 00:17:50.227 =================================================================================================================== 00:17:50.227 Total : 15772.53 61.61 0.00 0.00 8110.87 4320.52 17670.45 00:17:50.227 0 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 231743 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 231743 ']' 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 231743 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 231743 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:50.227 11:04:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 231743' 00:17:50.227 killing process with pid 231743 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 231743 00:17:50.227 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.227 00:17:50.227 Latency(us) 00:17:50.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.227 =================================================================================================================== 00:17:50.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.227 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 231743 00:17:50.485 11:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.743 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:51.001 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:51.001 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 229243 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 229243 00:17:51.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 229243 Killed "${NVMF_APP[@]}" "$@" 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=233149 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 233149 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 233149 ']' 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.260 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 [2024-07-11 11:04:05.615652] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:17:51.260 [2024-07-11 11:04:05.615765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.260 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.260 [2024-07-11 11:04:05.681790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.519 [2024-07-11 11:04:05.765461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.519 [2024-07-11 11:04:05.765515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.519 [2024-07-11 11:04:05.765546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.519 [2024-07-11 11:04:05.765557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.519 [2024-07-11 11:04:05.765567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
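This restart is what makes the dirty variant dirty: the first target (pid 229243) was killed with SIGKILL while the lvstore was mounted, and the replacement nvmf_tgt comes up with every tracepoint group enabled (-e 0xFFFF), which is why app_setup_trace points at /dev/shm/nvmf_trace.0. A snapshot of those tracepoints can be taken as the notice suggests; the spdk_trace binary location below is the usual build output path, not something this run confirms:

# inspect the tracepoint shm left behind by nvmf_tgt -i 0 (sketch)
build/bin/spdk_trace -s nvmf -i 0   # decode a live snapshot, per the notice above
cp /dev/shm/nvmf_trace.0 /tmp/      # or keep the raw file for offline analysis
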
00:17:51.519 [2024-07-11 11:04:05.765594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.519 11:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.778 [2024-07-11 11:04:06.180138] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:51.778 [2024-07-11 11:04:06.180296] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:51.778 [2024-07-11 11:04:06.180344] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.778 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:52.039 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0464a0fd-5291-4593-90af-8d7fc5120f1e -t 2000 00:17:52.298 [ 00:17:52.298 { 00:17:52.298 "name": "0464a0fd-5291-4593-90af-8d7fc5120f1e", 00:17:52.298 "aliases": [ 00:17:52.298 "lvs/lvol" 00:17:52.298 ], 00:17:52.298 "product_name": "Logical Volume", 00:17:52.298 "block_size": 4096, 00:17:52.298 "num_blocks": 38912, 00:17:52.298 "uuid": "0464a0fd-5291-4593-90af-8d7fc5120f1e", 00:17:52.298 "assigned_rate_limits": { 00:17:52.298 "rw_ios_per_sec": 0, 00:17:52.298 "rw_mbytes_per_sec": 0, 00:17:52.298 "r_mbytes_per_sec": 0, 00:17:52.298 "w_mbytes_per_sec": 0 00:17:52.298 }, 00:17:52.298 "claimed": false, 00:17:52.298 "zoned": false, 00:17:52.298 "supported_io_types": { 00:17:52.298 "read": true, 00:17:52.298 "write": true, 00:17:52.298 "unmap": true, 00:17:52.298 "flush": false, 00:17:52.298 "reset": true, 00:17:52.298 "nvme_admin": false, 00:17:52.298 "nvme_io": false, 00:17:52.298 "nvme_io_md": 
false, 00:17:52.298 "write_zeroes": true, 00:17:52.298 "zcopy": false, 00:17:52.298 "get_zone_info": false, 00:17:52.298 "zone_management": false, 00:17:52.298 "zone_append": false, 00:17:52.298 "compare": false, 00:17:52.298 "compare_and_write": false, 00:17:52.298 "abort": false, 00:17:52.298 "seek_hole": true, 00:17:52.298 "seek_data": true, 00:17:52.298 "copy": false, 00:17:52.298 "nvme_iov_md": false 00:17:52.298 }, 00:17:52.298 "driver_specific": { 00:17:52.298 "lvol": { 00:17:52.298 "lvol_store_uuid": "a2d2e994-a036-46f5-831a-7c4723bb9ace", 00:17:52.298 "base_bdev": "aio_bdev", 00:17:52.298 "thin_provision": false, 00:17:52.298 "num_allocated_clusters": 38, 00:17:52.298 "snapshot": false, 00:17:52.298 "clone": false, 00:17:52.298 "esnap_clone": false 00:17:52.298 } 00:17:52.298 } 00:17:52.298 } 00:17:52.298 ] 00:17:52.298 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:52.298 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:52.298 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:52.557 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:52.557 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:52.557 11:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:52.817 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:52.817 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:53.076 [2024-07-11 11:04:07.425137] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:53.076 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:53.335 request: 00:17:53.335 { 00:17:53.335 "uuid": "a2d2e994-a036-46f5-831a-7c4723bb9ace", 00:17:53.335 "method": "bdev_lvol_get_lvstores", 00:17:53.335 "req_id": 1 00:17:53.335 } 00:17:53.335 Got JSON-RPC error response 00:17:53.335 response: 00:17:53.335 { 00:17:53.335 "code": -19, 00:17:53.335 "message": "No such device" 00:17:53.335 } 00:17:53.594 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:53.594 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.594 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.594 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.594 11:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:53.852 aio_bdev 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.852 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.111 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0464a0fd-5291-4593-90af-8d7fc5120f1e -t 2000 00:17:54.368 [ 00:17:54.368 { 00:17:54.368 "name": "0464a0fd-5291-4593-90af-8d7fc5120f1e", 00:17:54.368 "aliases": [ 00:17:54.368 "lvs/lvol" 00:17:54.368 ], 00:17:54.368 "product_name": "Logical Volume", 00:17:54.368 "block_size": 4096, 00:17:54.368 "num_blocks": 38912, 00:17:54.368 "uuid": "0464a0fd-5291-4593-90af-8d7fc5120f1e", 00:17:54.368 "assigned_rate_limits": { 00:17:54.368 "rw_ios_per_sec": 0, 00:17:54.368 "rw_mbytes_per_sec": 0, 00:17:54.368 "r_mbytes_per_sec": 0, 00:17:54.368 "w_mbytes_per_sec": 0 00:17:54.368 }, 00:17:54.368 "claimed": false, 00:17:54.368 "zoned": false, 00:17:54.368 "supported_io_types": { 
00:17:54.368 "read": true, 00:17:54.368 "write": true, 00:17:54.368 "unmap": true, 00:17:54.368 "flush": false, 00:17:54.368 "reset": true, 00:17:54.368 "nvme_admin": false, 00:17:54.368 "nvme_io": false, 00:17:54.368 "nvme_io_md": false, 00:17:54.368 "write_zeroes": true, 00:17:54.368 "zcopy": false, 00:17:54.368 "get_zone_info": false, 00:17:54.368 "zone_management": false, 00:17:54.368 "zone_append": false, 00:17:54.368 "compare": false, 00:17:54.368 "compare_and_write": false, 00:17:54.368 "abort": false, 00:17:54.368 "seek_hole": true, 00:17:54.368 "seek_data": true, 00:17:54.368 "copy": false, 00:17:54.368 "nvme_iov_md": false 00:17:54.368 }, 00:17:54.368 "driver_specific": { 00:17:54.368 "lvol": { 00:17:54.368 "lvol_store_uuid": "a2d2e994-a036-46f5-831a-7c4723bb9ace", 00:17:54.368 "base_bdev": "aio_bdev", 00:17:54.368 "thin_provision": false, 00:17:54.368 "num_allocated_clusters": 38, 00:17:54.368 "snapshot": false, 00:17:54.368 "clone": false, 00:17:54.368 "esnap_clone": false 00:17:54.368 } 00:17:54.368 } 00:17:54.368 } 00:17:54.368 ] 00:17:54.368 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:54.368 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:54.368 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:54.627 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:54.627 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:54.627 11:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:54.886 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:54.886 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0464a0fd-5291-4593-90af-8d7fc5120f1e 00:17:55.146 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2d2e994-a036-46f5-831a-7c4723bb9ace 00:17:55.406 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:55.666 00:17:55.666 real 0m19.062s 00:17:55.666 user 0m48.162s 00:17:55.666 sys 0m4.529s 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:55.666 ************************************ 00:17:55.666 END TEST lvs_grow_dirty 00:17:55.666 ************************************ 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
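The assertions that close the dirty test are the actual payoff: after blobstore recovery the lvstore must report 99 total data clusters (the 400M file at 4 MiB per cluster, less metadata) and 61 free ones, meaning the 38 clusters backing the 150M lvol survived the crash and the grow stuck. The checks are plain jq queries over bdev_lvol_get_lvstores, roughly:

# post-recovery geometry check (sketch; UUID taken from the run above)
u=a2d2e994-a036-46f5-831a-7c4723bb9ace
total=$(scripts/rpc.py bdev_lvol_get_lvstores -u $u | jq -r '.[0].total_data_clusters')
free=$(scripts/rpc.py bdev_lvol_get_lvstores -u $u | jq -r '.[0].free_clusters')
(( total == 99 && free == 61 ))   # 38 clusters belong to the 150M lvol
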
00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:55.666 nvmf_trace.0 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.666 11:04:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.666 rmmod nvme_tcp 00:17:55.666 rmmod nvme_fabrics 00:17:55.666 rmmod nvme_keyring 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 233149 ']' 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 233149 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 233149 ']' 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 233149 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 233149 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 233149' 00:17:55.666 killing process with pid 233149 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 233149 00:17:55.666 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 233149 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.924 11:04:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.924 11:04:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.459 11:04:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.459 00:17:58.459 real 0m41.619s 00:17:58.459 user 1m10.627s 00:17:58.459 sys 0m8.327s 00:17:58.459 11:04:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.459 11:04:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:58.459 ************************************ 00:17:58.459 END TEST nvmf_lvs_grow 00:17:58.459 ************************************ 00:17:58.459 11:04:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.459 11:04:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:58.459 11:04:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.459 11:04:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.459 11:04:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.459 ************************************ 00:17:58.459 START TEST nvmf_bdev_io_wait 00:17:58.459 ************************************ 00:17:58.459 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:58.459 * Looking for test storage... 
00:17:58.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.459 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.459 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:58.459 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.460 11:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.362 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:18:00.363 00:18:00.363 --- 10.0.0.2 ping statistics --- 00:18:00.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.363 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:00.363 00:18:00.363 --- 10.0.0.1 ping statistics --- 00:18:00.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.363 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=235686 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 235686 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 235686 ']' 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.363 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.363 [2024-07-11 11:04:14.717075] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
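Everything from the ip netns add through the two pings above is the nvmf_tcp_init helper carving the two ice ports into a self-contained initiator/target topology: the target port moves into its own namespace so that traffic between 10.0.0.1 and 10.0.0.2 actually crosses the wire instead of short-circuiting through loopback. Condensed into a standalone sketch (interface names, addresses, and port 4420 are exactly as traced; only the error-handling framing is added):

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above. cvl_0_0 becomes the target port
# inside namespace cvl_0_0_ns_spdk; cvl_0_1 stays in the root namespace as
# the initiator port. Run as root with both ports already present.
set -euo pipefail

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic (port 4420) arriving on the initiator port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1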
00:18:00.363 [2024-07-11 11:04:14.717161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.363 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.364 [2024-07-11 11:04:14.780877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.624 [2024-07-11 11:04:14.864537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.624 [2024-07-11 11:04:14.864595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.624 [2024-07-11 11:04:14.864622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.624 [2024-07-11 11:04:14.864633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.624 [2024-07-11 11:04:14.864642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.624 [2024-07-11 11:04:14.864744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.624 [2024-07-11 11:04:14.864866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.624 [2024-07-11 11:04:14.864930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.624 [2024-07-11 11:04:14.864933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.624 11:04:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.624 [2024-07-11 11:04:15.029437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
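The ordering in the trace matters: nvmfappstart passes --wait-for-rpc, which pauses the target before subsystem initialization, so bdev_set_options can still change bdev pool sizes; only after framework_start_init does nvmf_create_transport run. Together with the bdev and subsystem steps traced just below, the whole bring-up reduces to this RPC sequence (rpc.py stands in for the harness's rpc_cmd wrapper, and paths are shortened; both are assumptions made for readability):

# Target runs inside the namespace built above; -m 0xF pins it to four cores.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

rpc=./scripts/rpc.py                    # assumed location of the SPDK RPC client
$rpc bdev_set_options -p 5 -c 1         # only legal while the app waits for RPCs
$rpc framework_start_init               # finish subsystem initialization
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420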
00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.624 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.884 Malloc0 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.884 [2024-07-11 11:04:15.097396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=235750 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=235752 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.884 { 00:18:00.884 "params": { 00:18:00.884 "name": "Nvme$subsystem", 00:18:00.884 "trtype": "$TEST_TRANSPORT", 00:18:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.884 "adrfam": "ipv4", 00:18:00.884 "trsvcid": "$NVMF_PORT", 00:18:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.884 "hdgst": ${hdgst:-false}, 00:18:00.884 "ddgst": ${ddgst:-false} 00:18:00.884 }, 00:18:00.884 "method": "bdev_nvme_attach_controller" 00:18:00.884 } 00:18:00.884 EOF 00:18:00.884 )") 00:18:00.884 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=235754 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.885 { 00:18:00.885 "params": { 00:18:00.885 "name": "Nvme$subsystem", 00:18:00.885 "trtype": "$TEST_TRANSPORT", 00:18:00.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.885 "adrfam": "ipv4", 00:18:00.885 "trsvcid": "$NVMF_PORT", 00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.885 "hdgst": ${hdgst:-false}, 00:18:00.885 "ddgst": ${ddgst:-false} 00:18:00.885 }, 00:18:00.885 "method": "bdev_nvme_attach_controller" 00:18:00.885 } 00:18:00.885 EOF 00:18:00.885 )") 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=235757 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.885 { 00:18:00.885 "params": { 00:18:00.885 "name": "Nvme$subsystem", 00:18:00.885 "trtype": "$TEST_TRANSPORT", 00:18:00.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.885 "adrfam": "ipv4", 00:18:00.885 "trsvcid": "$NVMF_PORT", 00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.885 "hdgst": ${hdgst:-false}, 00:18:00.885 "ddgst": ${ddgst:-false} 00:18:00.885 }, 00:18:00.885 "method": "bdev_nvme_attach_controller" 00:18:00.885 } 00:18:00.885 EOF 00:18:00.885 )") 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:00.885 11:04:15 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:00.885 { 00:18:00.885 "params": { 00:18:00.885 "name": "Nvme$subsystem", 00:18:00.885 "trtype": "$TEST_TRANSPORT", 00:18:00.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.885 "adrfam": "ipv4", 00:18:00.885 "trsvcid": "$NVMF_PORT", 00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.885 "hdgst": ${hdgst:-false}, 00:18:00.885 "ddgst": ${ddgst:-false} 00:18:00.885 }, 00:18:00.885 "method": "bdev_nvme_attach_controller" 00:18:00.885 } 00:18:00.885 EOF 00:18:00.885 )") 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 235750 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:00.885 "params": { 00:18:00.885 "name": "Nvme1", 00:18:00.885 "trtype": "tcp", 00:18:00.885 "traddr": "10.0.0.2", 00:18:00.885 "adrfam": "ipv4", 00:18:00.885 "trsvcid": "4420", 00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.885 "hdgst": false, 00:18:00.885 "ddgst": false 00:18:00.885 }, 00:18:00.885 "method": "bdev_nvme_attach_controller" 00:18:00.885 }' 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
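The IFS=, / printf / jq . fragments in the trace are gen_nvmf_target_json at work: each subsystem id produces one bdev_nvme_attach_controller object (the heredoc shown above), and the comma-joined array is pretty-printed into the JSON config each bdevperf instance reads. A simplified reading of the helper follows; the per-controller object is verbatim from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption about what nvmf/common.sh emits:

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Assumed wrapper: join the objects with commas (the IFS=, lines above)
    # and validate/pretty-print the result through jq (the 'jq .' lines above).
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; echo "${config[*]}") ] } ] }
JSON
}

bdevperf consumes the result through process substitution, which is exactly where the /dev/fd/63 in the command lines above comes from: --json <(gen_nvmf_target_json).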
00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:00.885 "params": {
00:18:00.885 "name": "Nvme1",
00:18:00.885 "trtype": "tcp",
00:18:00.885 "traddr": "10.0.0.2",
00:18:00.885 "adrfam": "ipv4",
00:18:00.885 "trsvcid": "4420",
00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:00.885 "hdgst": false,
00:18:00.885 "ddgst": false
00:18:00.885 },
00:18:00.885 "method": "bdev_nvme_attach_controller"
00:18:00.885 }'
00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:00.885 "params": {
00:18:00.885 "name": "Nvme1",
00:18:00.885 "trtype": "tcp",
00:18:00.885 "traddr": "10.0.0.2",
00:18:00.885 "adrfam": "ipv4",
00:18:00.885 "trsvcid": "4420",
00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:00.885 "hdgst": false,
00:18:00.885 "ddgst": false
00:18:00.885 },
00:18:00.885 "method": "bdev_nvme_attach_controller"
00:18:00.885 }'
00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=,
00:18:00.885 11:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:00.885 "params": {
00:18:00.885 "name": "Nvme1",
00:18:00.885 "trtype": "tcp",
00:18:00.885 "traddr": "10.0.0.2",
00:18:00.885 "adrfam": "ipv4",
00:18:00.885 "trsvcid": "4420",
00:18:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:00.885 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:00.885 "hdgst": false,
00:18:00.885 "ddgst": false
00:18:00.885 },
00:18:00.885 "method": "bdev_nvme_attach_controller"
00:18:00.885 }'
00:18:00.885 [2024-07-11 11:04:15.143282] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:00.885 [2024-07-11 11:04:15.143282] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:00.885 [2024-07-11 11:04:15.143376] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:18:00.885 [2024-07-11 11:04:15.143377] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:18:00.885 [2024-07-11 11:04:15.143379] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:00.885 [2024-07-11 11:04:15.143380] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:00.885 [2024-07-11 11:04:15.143454] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:18:00.885 [2024-07-11 11:04:15.143454] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:18:00.885 EAL: No free 2048 kB hugepages reported on node 1
00:18:00.885 EAL: No free 2048 kB hugepages reported on node 1
00:18:01.144 [2024-07-11 11:04:15.315795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.144 EAL: No free 2048 kB hugepages reported on node 1
00:18:01.144 [2024-07-11 11:04:15.389994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:18:01.144 [2024-07-11 11:04:15.416486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.144 EAL: No free 2048 kB hugepages reported on node 1
00:18:01.144 [2024-07-11 11:04:15.492971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:18:01.144 [2024-07-11 11:04:15.518349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.403 [2024-07-11 11:04:15.585784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.403 [2024-07-11 11:04:15.590006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:18:01.403 [2024-07-11 11:04:15.653087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:18:01.403 Running I/O for 1 seconds...
00:18:01.403 Running I/O for 1 seconds...
00:18:01.662 Running I/O for 1 seconds...
00:18:01.662 Running I/O for 1 seconds...
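The four start-up banners and four Running I/O for 1 seconds... lines are one bdevperf process per workload, pinned to disjoint core masks (0x10/0x20/0x40/0x80) and given distinct shared-memory ids (-i 1 through 4, hence the spdk1..spdk4 file prefixes) so they can run concurrently against the same subsystem. Stripped of the long workspace paths, the launch pattern is:

bdevperf=./build/examples/bdevperf
opts=(-q 128 -o 4096 -t 1 -s 256)

"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${opts[@]}" -w write & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${opts[@]}" -w read  & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${opts[@]}" -w flush & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${opts[@]}" -w unmap & UNMAP_PID=$!

# bdev_io_wait.sh@37-40: reap each job once its one-second run completes.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"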
00:18:02.598
00:18:02.598 Latency(us)
00:18:02.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:02.598 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:18:02.598 Nvme1n1 : 1.03 6838.94 26.71 0.00 0.00 18464.76 8155.59 32039.82
00:18:02.598 ===================================================================================================================
00:18:02.598 Total : 6838.94 26.71 0.00 0.00 18464.76 8155.59 32039.82
00:18:02.598
00:18:02.598 Latency(us)
00:18:02.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:02.598 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:18:02.598 Nvme1n1 : 1.01 9235.39 36.08 0.00 0.00 13787.99 9320.68 26214.40
00:18:02.598 ===================================================================================================================
00:18:02.598 Total : 9235.39 36.08 0.00 0.00 13787.99 9320.68 26214.40
00:18:02.598
00:18:02.598 Latency(us)
00:18:02.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:02.598 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:18:02.598 Nvme1n1 : 1.01 6768.95 26.44 0.00 0.00 18844.68 5922.51 42525.58
00:18:02.598 ===================================================================================================================
00:18:02.598 Total : 6768.95 26.44 0.00 0.00 18844.68 5922.51 42525.58
00:18:02.598
00:18:02.598 Latency(us)
00:18:02.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:02.598 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:18:02.598 Nvme1n1 : 1.00 196126.30 766.12 0.00 0.00 650.13 273.07 773.69
00:18:02.598 ===================================================================================================================
00:18:02.598 Total : 196126.30 766.12 0.00 0.00 650.13 273.07 773.69
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 235752
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 235754
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 235757
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:18:02.856 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:02.857 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:18:02.857 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:02.857 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:18:02.857 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:02.857 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:02.857 rmmod nvme_tcp
00:18:02.857 rmmod nvme_fabrics
00:18:02.857 rmmod nvme_keyring
00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait --
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 235686 ']' 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 235686 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 235686 ']' 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 235686 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 235686 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 235686' 00:18:03.117 killing process with pid 235686 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 235686 00:18:03.117 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 235686 00:18:03.377 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.378 11:04:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.282 11:04:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.282 00:18:05.282 real 0m7.239s 00:18:05.282 user 0m16.733s 00:18:05.282 sys 0m3.430s 00:18:05.282 11:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.282 11:04:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:05.282 ************************************ 00:18:05.282 END TEST nvmf_bdev_io_wait 00:18:05.282 ************************************ 00:18:05.282 11:04:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.282 11:04:19 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:05.282 11:04:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.282 11:04:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.282 11:04:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.282 ************************************ 00:18:05.282 START TEST nvmf_queue_depth 00:18:05.282 ************************************ 00:18:05.283 
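Before following the queue_depth run, note what nvmftestfini unwound in the trace just above: kernel NVMe modules out (the rmmod lines), the nvmf_tgt process killed and reaped, the namespace removed, and the initiator address flushed. In rough outline; the _remove_spdk_ns body never appears in the log, so the netns deletion below is an assumption:

# Approximate shape of nvmftestfini's TCP cleanup path.
sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # retried; fabrics may still hold a reference
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess $nvmfpid
ip netns delete cvl_0_0_ns_spdk         # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1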
11:04:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:05.283 * Looking for test storage... 00:18:05.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.283 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.541 11:04:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.542 11:04:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.447 
11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.447 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.448 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:07.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:07.708 00:18:07.708 --- 10.0.0.2 ping statistics --- 00:18:07.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.708 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:18:07.708 00:18:07.708 --- 10.0.0.1 ping statistics --- 00:18:07.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.708 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=237972 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 237972 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 237972 ']' 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.708 11:04:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.708 [2024-07-11 11:04:21.968524] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
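waitforlisten is what separates "the target was forked" from "the target is usable": it polls the UNIX-domain RPC socket until the new process answers, bailing out if the process dies first. A minimal stand-in with the same shape (rpc_addr and max_retries match the traced defaults; probing with the rpc_get_methods RPC is an assumption, since the real helper's probe is not visible in the log):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1     # process died while we waited
        if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}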
00:18:07.708 [2024-07-11 11:04:21.968609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.708 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.708 [2024-07-11 11:04:22.032127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.708 [2024-07-11 11:04:22.119938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.708 [2024-07-11 11:04:22.120000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.708 [2024-07-11 11:04:22.120013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.708 [2024-07-11 11:04:22.120039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.708 [2024-07-11 11:04:22.120048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.708 [2024-07-11 11:04:22.120073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 [2024-07-11 11:04:22.260929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 Malloc0 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.970 
11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 [2024-07-11 11:04:22.322777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=237993 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 237993 /var/tmp/bdevperf.sock 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 237993 ']' 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.970 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.970 [2024-07-11 11:04:22.366004] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
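queue_depth drives bdevperf differently from the previous test: -z starts it idle with its own RPC socket, the NVMe-oF controller is attached at runtime over that socket, and bdevperf.py's perform_tests command then kicks off the configured 10-second verify workload at queue depth 1024. In outline, with paths shortened but flags as traced:

# Idle bdevperf with a private RPC socket; -q 1024 is the depth under test.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

# queue_depth.sh@34: attach the target's subsystem over the bdevperf socket.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# queue_depth.sh@35: tell the idle process to run its workload now.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests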
00:18:07.970 [2024-07-11 11:04:22.366104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237993 ] 00:18:07.970 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.228 [2024-07-11 11:04:22.424496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.228 [2024-07-11 11:04:22.508076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.228 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.228 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:08.228 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:08.228 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.228 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 NVMe0n1 00:18:08.487 11:04:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.487 11:04:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.487 Running I/O for 10 seconds... 00:18:18.505 00:18:18.505 Latency(us) 00:18:18.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.505 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:18.505 Verification LBA range: start 0x0 length 0x4000 00:18:18.505 NVMe0n1 : 10.10 8889.26 34.72 0.00 0.00 114665.30 20874.43 79225.74 00:18:18.505 =================================================================================================================== 00:18:18.505 Total : 8889.26 34.72 0.00 0.00 114665.30 20874.43 79225.74 00:18:18.505 0 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 237993 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 237993 ']' 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 237993 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237993 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237993' 00:18:18.766 killing process with pid 237993 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 237993 00:18:18.766 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.766 00:18:18.766 Latency(us) 00:18:18.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.766 =================================================================================================================== 
00:18:18.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.766 11:04:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 237993 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.028 rmmod nvme_tcp 00:18:19.028 rmmod nvme_fabrics 00:18:19.028 rmmod nvme_keyring 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 237972 ']' 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 237972 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 237972 ']' 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 237972 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 237972 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 237972' 00:18:19.028 killing process with pid 237972 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 237972 00:18:19.028 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 237972 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.289 11:04:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.191 11:04:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.191 00:18:21.191 real 0m15.934s 00:18:21.191 user 0m22.230s 
00:18:21.191 sys 0m3.117s 00:18:21.191 11:04:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:21.191 11:04:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:21.191 ************************************ 00:18:21.191 END TEST nvmf_queue_depth 00:18:21.191 ************************************ 00:18:21.191 11:04:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:21.191 11:04:35 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:21.191 11:04:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:21.191 11:04:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.191 11:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:21.450 ************************************ 00:18:21.450 START TEST nvmf_target_multipath 00:18:21.450 ************************************ 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:21.450 * Looking for test storage... 00:18:21.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.450 11:04:35 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.450 11:04:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:21.451 11:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:23.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:23.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.357 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:23.617 00:18:23.617 --- 10.0.0.2 ping statistics --- 00:18:23.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.617 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:23.617 00:18:23.617 --- 10.0.0.1 ping statistics --- 00:18:23.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.617 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:23.617 only one NIC for nvmf test 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.617 rmmod nvme_tcp 00:18:23.617 rmmod nvme_fabrics 00:18:23.617 rmmod nvme_keyring 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.617 11:04:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.158 00:18:26.158 real 0m4.348s 00:18:26.158 user 0m0.837s 00:18:26.158 sys 0m1.514s 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.158 11:04:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:26.158 ************************************ 00:18:26.158 END TEST nvmf_target_multipath 00:18:26.158 ************************************ 00:18:26.158 11:04:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.158 11:04:40 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:26.158 11:04:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.158 11:04:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.158 11:04:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.158 ************************************ 00:18:26.158 START TEST nvmf_zcopy 00:18:26.158 ************************************ 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:26.158 * Looking for test storage... 
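The multipath test above exited without ever starting a target: nvmf/common.sh detected only one usable NIC pair and left NVMF_SECOND_TARGET_IP empty, so the guard at multipath.sh lines 45-48 printed 'only one NIC for nvmf test', ran nvmftestfini, and returned success. Reconstructed from the trace (not verbatim source; the variable name is inferred from the NVMF_SECOND_TARGET_IP= assignment at common.sh line 240 above), the guard is presumably:

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # second path unavailable on this rig
        echo 'only one NIC for nvmf test'
        nvmftestfini                            # tear down netns, interfaces, modules
        exit 0                                  # skip, do not fail the run
    fi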
00:18:26.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.158 11:04:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.159 11:04:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.065 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.066 
11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:18:28.066 00:18:28.066 --- 10.0.0.2 ping statistics --- 00:18:28.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.066 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:18:28.066 00:18:28.066 --- 10.0.0.1 ping statistics --- 00:18:28.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.066 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=243155 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 243155 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 243155 ']' 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.066 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 [2024-07-11 11:04:42.324566] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:18:28.066 [2024-07-11 11:04:42.324644] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.066 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.066 [2024-07-11 11:04:42.389517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.066 [2024-07-11 11:04:42.479112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.066 [2024-07-11 11:04:42.479167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:28.066 [2024-07-11 11:04:42.479181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.066 [2024-07-11 11:04:42.479193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.066 [2024-07-11 11:04:42.479203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.066 [2024-07-11 11:04:42.479231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 [2024-07-11 11:04:42.611182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 [2024-07-11 11:04:42.627355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 malloc0 00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.345 
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:28.345 {
00:18:28.345 "params": {
00:18:28.345 "name": "Nvme$subsystem",
00:18:28.345 "trtype": "$TEST_TRANSPORT",
00:18:28.345 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:28.345 "adrfam": "ipv4",
00:18:28.345 "trsvcid": "$NVMF_PORT",
00:18:28.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:28.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:28.345 "hdgst": ${hdgst:-false},
00:18:28.345 "ddgst": ${ddgst:-false}
00:18:28.345 },
00:18:28.345 "method": "bdev_nvme_attach_controller"
00:18:28.345 }
00:18:28.345 EOF
00:18:28.345 )")
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:28.345 11:04:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:28.345 "params": {
00:18:28.345 "name": "Nvme1",
00:18:28.345 "trtype": "tcp",
00:18:28.345 "traddr": "10.0.0.2",
00:18:28.345 "adrfam": "ipv4",
00:18:28.345 "trsvcid": "4420",
00:18:28.345 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:28.345 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:28.345 "hdgst": false,
00:18:28.345 "ddgst": false
00:18:28.345 },
00:18:28.345 "method": "bdev_nvme_attach_controller"
00:18:28.345 }'
00:18:28.345 [2024-07-11 11:04:42.702663] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:28.345 [2024-07-11 11:04:42.702768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243179 ]
00:18:28.345 EAL: No free 2048 kB hugepages reported on node 1
00:18:28.615 [2024-07-11 11:04:42.761673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:28.900 [2024-07-11 11:04:42.846346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:28.900 Running I/O for 10 seconds...
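The --json /dev/fd/62 argument is gen_nvmf_target_json's output delivered through process substitution; the resolved bdev_nvme_attach_controller block is printed a few lines up. For a standalone run the same configuration can be written to a file. Note that the outer "subsystems"/"config" wrapper below is my assumption based on SPDK's usual JSON-config shape, since the trace only shows the inner block, and the /tmp path is arbitrary:

# Sketch: one-off bdevperf run against the target above, same flags as the log
# (10 s, queue depth 128, verify workload, 8 KiB I/O).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192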
00:18:38.902
00:18:38.902 Latency(us)
00:18:38.902 Device Information                                                        : runtime(s)     IOPS    MiB/s   Fail/s  TO/s    Average      min       max
00:18:38.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:38.902 Verification LBA range: start 0x0 length 0x1000
00:18:38.902 Nvme1n1                                                                   :      10.02  6019.86    47.03    0.00   0.00   21203.72  2111.72  31263.10
00:18:38.902 ===================================================================================================================
00:18:38.902 Total                                                                     :             6019.86    47.03    0.00   0.00   21203.72  2111.72  31263.10
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=244380
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:38.902 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:39.163 {
00:18:39.163 "params": {
00:18:39.163 "name": "Nvme$subsystem",
00:18:39.163 "trtype": "$TEST_TRANSPORT",
00:18:39.163 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:39.163 "adrfam": "ipv4",
00:18:39.163 "trsvcid": "$NVMF_PORT",
00:18:39.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:39.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:39.163 "hdgst": ${hdgst:-false},
00:18:39.163 "ddgst": ${ddgst:-false}
00:18:39.163 },
00:18:39.163 "method": "bdev_nvme_attach_controller"
00:18:39.163 }
00:18:39.163 EOF
00:18:39.163 )")
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:39.163 [2024-07-11 11:04:53.331780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:39.163 [2024-07-11 11:04:53.331831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
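A quick cross-check of the 10-second verify summary above (arithmetic mine, not the log's): at the 8 KiB I/O size, 6019.86 IOPS works out to 6019.86 * 8192 / 1048576, about 47.03 MiB/s, matching the MiB/s column; and Little's law with the queue depth of 128 predicts an average latency of 128 / 6019.86 s, about 21.3 ms, consistent with the reported 21203.72 us average:

# Sanity-check of the bdevperf summary (not part of the log):
awk 'BEGIN { printf "%.2f MiB/s  %.0f us avg\n", 6019.86 * 8192 / 1048576, 128 / 6019.86 * 1e6 }'
# prints: 47.03 MiB/s  21263 us avg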
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:39.163 11:04:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:39.163 "params": {
00:18:39.163 "name": "Nvme1",
00:18:39.163 "trtype": "tcp",
00:18:39.163 "traddr": "10.0.0.2",
00:18:39.163 "adrfam": "ipv4",
00:18:39.163 "trsvcid": "4420",
00:18:39.163 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:39.163 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:39.163 "hdgst": false,
00:18:39.163 "ddgst": false
00:18:39.163 },
00:18:39.163 "method": "bdev_nvme_attach_controller"
00:18:39.163 }'
00:18:39.163 [2024-07-11 11:04:53.339720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:39.163 [2024-07-11 11:04:53.339767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeated 3 more times (11:04:53.347 through 11:04:53.363), trimmed ...]
00:18:39.163 [2024-07-11 11:04:53.370491] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:18:39.163 [2024-07-11 11:04:53.370576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244380 ]
[... error pair repeated 4 more times (11:04:53.372 through 11:04:53.396), trimmed ...]
00:18:39.163 EAL: No free 2048 kB hugepages reported on node 1
[... error pair repeated ~15 more times (11:04:53.404 through 11:04:53.516), trimmed; the second bdevperf's startup notices land in the middle of the stream: ...]
00:18:39.163 [2024-07-11 11:04:53.433017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:39.163 [2024-07-11 11:04:53.520427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
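The config=() / config+=("$(cat <<-EOF ...)") / IFS=, / printf | jq steps traced above are nvmf/common.sh's gen_nvmf_target_json assembling one attach-controller block per requested subsystem and pretty-printing the result, which bdevperf then consumes as /dev/fd/63 via process substitution. Distilled, the pattern looks roughly like the sketch below; the function name and the outer "subsystems" wrapper are mine, inferred rather than shown in the trace:

gen_target_json_sketch() {   # distilled sketch of gen_nvmf_target_json, names mine
  local subsystem config=()
  for subsystem in "${@:-1}"; do         # default to a single subsystem "1"
    config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  }
}
EOF
    )")
  done
  local IFS=,                            # joins array elements with commas below
  printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
}

Used the way the trace uses it, this would be something like: build/examples/bdevperf --json <(gen_target_json_sketch 1) -t 5 -q 128 -w randrw -M 50 -o 8192, which is exactly where the /dev/fd/63 path in the invocation above comes from.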
[... the add_ns error pair repeats ~37 more times at roughly 8-10 ms intervals (11:04:53.524 through 11:04:53.813), trimmed ...]
00:18:39.424 Running I/O for 5 seconds...
00:18:39.424 [2024-07-11 11:04:53.821356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:39.424 [2024-07-11 11:04:53.821394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair then repeats roughly 180 more times at ~10 ms intervals for the remainder of this excerpt (11:04:53.834 through 11:04:55.778, elapsed markers 00:18:39.424 through 00:18:41.503), trimmed; the run continues past the end of this section ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.789020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.789047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.799245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.799272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.809887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.809915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.820068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.820096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.830608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.830636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.841024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.841051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.851580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.851608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.862162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.862189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.872913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.872941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.883799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.883826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.894340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.894376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.906664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.906692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.916660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.916689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.503 [2024-07-11 11:04:55.926940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.503 [2024-07-11 11:04:55.926969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.937497] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.937525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.948029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.948057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.960654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.960681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.970747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.970782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.981162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.981189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:55.991359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:55.991387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.002161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.002189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.014435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.014463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.023465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.023492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.035612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.035640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.045625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.045653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.056028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.056056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.066360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.066388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.076911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.076938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.087094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.087121] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.097618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.097654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.110700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.110728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.120792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.120819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.131155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.131183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.141447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.141475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.151368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.151396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.161743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.161779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.171983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.172011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.764 [2024-07-11 11:04:56.182579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.764 [2024-07-11 11:04:56.182606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.023 [2024-07-11 11:04:56.193254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.023 [2024-07-11 11:04:56.193283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.023 [2024-07-11 11:04:56.203605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.023 [2024-07-11 11:04:56.203633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.023 [2024-07-11 11:04:56.213548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.023 [2024-07-11 11:04:56.213576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.023 [2024-07-11 11:04:56.224054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.023 [2024-07-11 11:04:56.224082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.023 [2024-07-11 11:04:56.236507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.023 [2024-07-11 11:04:56.236534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.247121] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.247149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.257506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.257534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.270007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.270035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.279978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.280006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.290274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.290301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.300772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.300799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.311339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.311366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.321707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.321735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.332093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.332120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.342562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.342589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.353040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.353067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.363466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.363493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.374326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.374354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.386760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.386787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.396637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.396665] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.406859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.406887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.417246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.417274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.427576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.427604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.438020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.438049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.024 [2024-07-11 11:04:56.448436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.024 [2024-07-11 11:04:56.448464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.458781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.458809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.469706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.469733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.480951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.480979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.491873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.491901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.502685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.502713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.515029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.515058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.524821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.524849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.535357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.535384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.545919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.545947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.558324] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.558352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.568264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.568291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.579016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.579044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.589385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.589413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.600322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.600349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.611115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.611142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.621368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.621396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.631911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.631939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.642140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.642169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.652959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.652987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.665850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.665877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.676101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.676130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.686594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.686622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.699235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.699263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.284 [2024-07-11 11:04:56.708936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.284 [2024-07-11 11:04:56.708964] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.719394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.719423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.731826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.731854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.741838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.741866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.752576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.752605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.765395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.765423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.775557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.775585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.785516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.785544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.796268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.796296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.808562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.808590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.820201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.820228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.828905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.828933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.840313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.840341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.851225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.851252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.862004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.862032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.875186] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.875214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.885486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.885515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.895975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.896003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.908631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.908659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.920305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.920332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.929455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.929483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.940635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.940663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.951293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.951321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.545 [2024-07-11 11:04:56.961993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.545 [2024-07-11 11:04:56.962021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:56.972268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:56.972306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:56.982428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:56.982456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:56.992857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:56.992885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.003375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.003403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.013858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.013886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.024392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.024419] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.034807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.034836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.045547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.045575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.058720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.058747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.068848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.068882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.079467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.079495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.091948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.091975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.101578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.101605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.112145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.112181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.122959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.122987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.136014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.136042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.146460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.146488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.156929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.156957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.167145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.167172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.177944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.177972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.188720] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.188749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.199222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.199251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.209968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.209997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.806 [2024-07-11 11:04:57.222403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.806 [2024-07-11 11:04:57.222431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.232648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.232677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.243123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.243150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.253588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.253615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.264099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.264127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.274108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.274136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.285061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.285088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.297595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.297622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.307314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.307342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.318079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.318114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.328704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.328732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.339409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.339437] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.352012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.352040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.362054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.362081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.372994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.373022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.383610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.383638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.393862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.393889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.404132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.404160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.415213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.415241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.428164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.066 [2024-07-11 11:04:57.428191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.066 [2024-07-11 11:04:57.439894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.067 [2024-07-11 11:04:57.439922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.067 [2024-07-11 11:04:57.449106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.067 [2024-07-11 11:04:57.449134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.067 [2024-07-11 11:04:57.460673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.067 [2024-07-11 11:04:57.460700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.067 [2024-07-11 11:04:57.473083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.067 [2024-07-11 11:04:57.473110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.067 [2024-07-11 11:04:57.483345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.067 [2024-07-11 11:04:57.483372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.493720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.493749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.504023] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.504051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.514764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.514792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.527351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.527386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.537182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.537209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.547946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.547974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.560227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.560255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.570427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.570455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.580914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.580941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.591621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.591649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.602660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.602687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.614684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.614712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.624793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.624821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.635405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.635433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.645894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.327 [2024-07-11 11:04:57.645922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.327 [2024-07-11 11:04:57.658084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.658111] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.667874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.667902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.678812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.678840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.689353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.689380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.699555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.699583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.709961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.709988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.720345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.720373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.730740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.730783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.328 [2024-07-11 11:04:57.741251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.328 [2024-07-11 11:04:57.741278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.752256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.752286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.762791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.762818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.776312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.776339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.786418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.786446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.797420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.797449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.810144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.587 [2024-07-11 11:04:57.810172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.587 [2024-07-11 11:04:57.820419] 
[log condensed] From [2024-07-11 11:04:57.820447] through [2024-07-11 11:04:58.843589] the two-line error pair
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
repeats at roughly 10 ms intervals, identical except for timestamps, while zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 during active I/O.
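[editor's note] The condensed run above is the failure mode zcopy.sh deliberately provokes: nvmf_subsystem_add_ns is re-issued for an NSID that is still attached while I/O is in flight. A minimal sketch of the collision, assuming a running target on the default RPC socket and the subsystem nqn.2016-06.io.spdk:cnode1; the mallocA/mallocB bdev names are illustrative, not taken from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 32 512 -b mallocA
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 mallocA -n 1   # first add of NSID 1 succeeds
  $RPC bdev_malloc_create 32 512 -b mallocB
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 mallocB -n 1   # fails: NSID 1 still in use
  # the target logs "Requested NSID 1 already in use" (subsystem.c) and the RPC
  # layer reports "Unable to add namespace" (nvmf_rpc.c), matching the pair above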
00:18:44.630 Latency(us)
00:18:44.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:44.630 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:44.630 Nvme1n1 : 5.01 12060.28 94.22 0.00 0.00 10600.61 4684.61 20000.62
00:18:44.630 ===================================================================================================================
00:18:44.630 Total : 12060.28 94.22 0.00 0.00 10600.61 4684.61 20000.62
[log condensed] The same error pair resumes at [2024-07-11 11:04:58.851587] and repeats at roughly 8 ms intervals through [2024-07-11 11:04:59.068213], after which the background add/remove loop is reaped:
00:18:44.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (244380) - No such process
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 244380
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:44.890 delay0
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
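[editor's note] The bdev_delay_create call above wraps malloc0 in a delay bdev so the abort run that follows has slow I/O outstanding to cancel. Restated with the flags annotated; the meanings are my reading of the delay bdev RPC (latencies in microseconds), so treat this as a sketch:

  # -b base bdev to wrap                -d name of the new delay bdev
  # -r average read latency (us)        -t p99 read latency (us)
  # -w average write latency (us)       -n p99 write latency (us)
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # with ~1 s per I/O, a queue depth of 64 over the 5 s run below completes
  # about 320 I/Os, which matches the abort example's "I/O completed: 320"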
11:04:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.890 11:04:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:44.890 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.890 [2024-07-11 11:04:59.181970] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:51.460 [2024-07-11 11:05:05.373774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a290 is same with the state(5) to be set 00:18:51.460 [2024-07-11 11:05:05.373835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a290 is same with the state(5) to be set 00:18:51.460 Initializing NVMe Controllers 00:18:51.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.460 Initialization complete. Launching workers. 00:18:51.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 161 00:18:51.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 448, failed to submit 33 00:18:51.460 success 295, unsuccess 153, failed 0 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.460 rmmod nvme_tcp 00:18:51.460 rmmod nvme_fabrics 00:18:51.460 rmmod nvme_keyring 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 243155 ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 243155 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 243155 ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 243155 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 243155 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 243155' 00:18:51.460 killing process with pid 243155 00:18:51.460 11:05:05 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 243155 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 243155 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.460 11:05:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.364 11:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.365 00:18:53.365 real 0m27.726s 00:18:53.365 user 0m41.844s 00:18:53.365 sys 0m7.470s 00:18:53.365 11:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.365 11:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.365 ************************************ 00:18:53.365 END TEST nvmf_zcopy 00:18:53.365 ************************************ 00:18:53.365 11:05:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.365 11:05:07 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.365 11:05:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.365 11:05:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.365 11:05:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.624 ************************************ 00:18:53.624 START TEST nvmf_nmic 00:18:53.624 ************************************ 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.624 * Looking for test storage... 
00:18:53.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.624 11:05:07 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.624 11:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.528 
11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.528 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.529 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.788 11:05:09 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:55.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.788 11:05:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:18:55.788 00:18:55.788 --- 10.0.0.2 ping statistics --- 00:18:55.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.788 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:18:55.788 00:18:55.788 --- 10.0.0.1 ping statistics --- 00:18:55.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.788 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=247756 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 247756 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 247756 ']' 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.788 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.788 [2024-07-11 11:05:10.180913] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:18:55.788 [2024-07-11 11:05:10.181012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.046 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.047 [2024-07-11 11:05:10.244917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.047 [2024-07-11 11:05:10.328462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.047 [2024-07-11 11:05:10.328506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
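[editor's note] The nvmftestinit sequence above moved one port of the NIC into a fresh network namespace for the target (10.0.0.2) while the initiator side stayed in the root namespace (10.0.0.1), verified reachability with ping in both directions, and then launched nvmf_tgt inside the namespace. The same steps, condensed from this trace into a standalone sketch (interface names cvl_0_0/cvl_0_1 and the binary path are the ones this job used):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port joins the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP on the default port
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF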
00:18:56.047 [2024-07-11 11:05:10.328529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.047 [2024-07-11 11:05:10.328539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.047 [2024-07-11 11:05:10.328553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.047 [2024-07-11 11:05:10.328645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.047 [2024-07-11 11:05:10.328711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.047 [2024-07-11 11:05:10.328830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.047 [2024-07-11 11:05:10.328833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.047 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.047 [2024-07-11 11:05:10.468354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 Malloc0 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 [2024-07-11 11:05:10.519604] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:56.305 test case1: single bdev can't be used in multiple subsystems 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.305 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.305 [2024-07-11 11:05:10.543517] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:56.305 [2024-07-11 11:05:10.543563] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:56.305 [2024-07-11 11:05:10.543586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.305 request: 00:18:56.305 { 00:18:56.305 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.305 "namespace": { 00:18:56.305 "bdev_name": "Malloc0", 00:18:56.305 "no_auto_visible": false 00:18:56.305 }, 00:18:56.305 "method": "nvmf_subsystem_add_ns", 00:18:56.305 "req_id": 1 00:18:56.305 } 00:18:56.305 Got JSON-RPC error response 00:18:56.305 response: 00:18:56.305 { 00:18:56.305 "code": -32602, 00:18:56.305 "message": "Invalid parameters" 00:18:56.305 } 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:56.306 Adding namespace failed - expected result. 
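[editor's note] Test case1 exercises the bdev layer's exclusive_write claim: cnode1 already holds Malloc0 open for writing, so the open attempted on behalf of cnode2 fails (bdev.c:8078 above) and the RPC returns code -32602 as shown. A hedged sketch of the same negative check outside the harness, assuming the target state built up in this test:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # expected to fail while Malloc0 is claimed by cnode1; rpc.py exits nonzero
  # on a JSON-RPC error response
  if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 -n 1; then
      echo 'Adding namespace failed - expected result.'
  fi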
00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:56.306 test case2: host connect to nvmf target in multiple paths 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.306 [2024-07-11 11:05:10.551604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.306 11:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:56.874 11:05:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:57.444 11:05:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:57.444 11:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.444 11:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.444 11:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:57.444 11:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:59.978 11:05:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:59.978 [global] 00:18:59.978 thread=1 00:18:59.978 invalidate=1 00:18:59.978 rw=write 00:18:59.978 time_based=1 00:18:59.978 runtime=1 00:18:59.978 ioengine=libaio 00:18:59.978 direct=1 00:18:59.978 bs=4096 00:18:59.978 iodepth=1 00:18:59.978 norandommap=0 00:18:59.978 numjobs=1 00:18:59.978 00:18:59.978 verify_dump=1 00:18:59.978 verify_backlog=512 00:18:59.978 verify_state_save=0 00:18:59.978 do_verify=1 00:18:59.978 verify=crc32c-intel 00:18:59.978 [job0] 00:18:59.978 filename=/dev/nvme0n1 00:18:59.978 Could not set queue depth (nvme0n1) 00:18:59.978 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.978 fio-3.35 00:18:59.978 Starting 1 thread 00:19:01.356 00:19:01.356 job0: (groupid=0, jobs=1): err= 0: pid=248392: Thu Jul 11 11:05:15 2024 00:19:01.356 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:19:01.356 slat (nsec): min=7463, max=35650, avg=28187.22, stdev=8865.04 00:19:01.356 
clat (usec): min=339, max=41993, avg=39481.23, stdev=8544.31 00:19:01.356 lat (usec): min=373, max=42015, avg=39509.42, stdev=8543.30 00:19:01.356 clat percentiles (usec): 00:19:01.356 | 1.00th=[ 338], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:01.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:01.356 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:01.356 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:01.356 | 99.99th=[42206] 00:19:01.356 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:19:01.356 slat (usec): min=7, max=30658, avg=78.74, stdev=1354.13 00:19:01.356 clat (usec): min=129, max=236, avg=168.78, stdev=14.60 00:19:01.356 lat (usec): min=138, max=30895, avg=247.52, stdev=1357.24 00:19:01.356 clat percentiles (usec): 00:19:01.356 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 149], 20.00th=[ 161], 00:19:01.356 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:19:01.356 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:19:01.356 | 99.00th=[ 202], 99.50th=[ 219], 99.90th=[ 237], 99.95th=[ 237], 00:19:01.356 | 99.99th=[ 237] 00:19:01.356 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.356 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.356 lat (usec) : 250=95.70%, 500=0.19% 00:19:01.356 lat (msec) : 50=4.11% 00:19:01.356 cpu : usr=0.67%, sys=1.16%, ctx=537, majf=0, minf=2 00:19:01.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.356 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.356 00:19:01.356 Run status group 0 (all jobs): 00:19:01.356 READ: bw=88.5KiB/s (90.7kB/s), 88.5KiB/s-88.5KiB/s (90.7kB/s-90.7kB/s), io=92.0KiB (94.2kB), run=1039-1039msec 00:19:01.356 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:19:01.356 00:19:01.356 Disk stats (read/write): 00:19:01.356 nvme0n1: ios=44/512, merge=0/0, ticks=1702/73, in_queue=1775, util=98.70% 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:01.356 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.356 
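The fio-wrapper call in the run above is a thin front end: it effectively materializes the [global]/[job0] job file printed in the log and runs stock fio against the connected namespace. A roughly equivalent standalone invocation, assuming /dev/nvme0n1 is the NVMe-oF namespace from the connect step (flags mirror the job file shown above):

    fio --name=job0 --filename=/dev/nvme0n1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0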
11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.357 rmmod nvme_tcp 00:19:01.357 rmmod nvme_fabrics 00:19:01.357 rmmod nvme_keyring 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 247756 ']' 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 247756 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 247756 ']' 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 247756 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 247756 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 247756' 00:19:01.357 killing process with pid 247756 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 247756 00:19:01.357 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 247756 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.618 11:05:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.523 11:05:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.523 00:19:03.523 real 0m10.077s 00:19:03.523 user 0m22.761s 00:19:03.523 sys 0m2.522s 00:19:03.523 11:05:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:03.523 11:05:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:03.523 ************************************ 00:19:03.523 END TEST nvmf_nmic 00:19:03.523 ************************************ 00:19:03.523 11:05:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:03.523 11:05:17 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:03.523 11:05:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:19:03.523 11:05:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.523 11:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.523 ************************************ 00:19:03.523 START TEST nvmf_fio_target 00:19:03.523 ************************************ 00:19:03.523 11:05:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:03.781 * Looking for test storage... 00:19:03.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.781 11:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.781 11:05:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:03.781 11:05:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.781 11:05:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.781 11:05:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.782 11:05:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:05.686 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.687 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.945 11:05:20 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:05.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.945 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:05.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.946 11:05:20 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:05.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:05.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:05.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:05.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms
00:19:05.946
00:19:05.946 --- 10.0.0.2 ping statistics ---
00:19:05.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:05.946 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:05.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:05.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:19:05.946
00:19:05.946 --- 10.0.0.1 ping statistics ---
00:19:05.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:05.946 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=250468
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 250468
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 250468 ']'
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
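The nvmf/common.sh setup traced above is a plain Linux network-namespace topology: the NIC's two ports are split so the target side (cvl_0_0, 10.0.0.2) lives in its own namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the default one, and the ping pair verifies reachability in both directions before the target app is launched inside the namespace. Condensed from the commands in the trace (nvmf_tgt path shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF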
00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.946 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.946 [2024-07-11 11:05:20.323234] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:19:05.946 [2024-07-11 11:05:20.323303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.946 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.205 [2024-07-11 11:05:20.386088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.205 [2024-07-11 11:05:20.472046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.205 [2024-07-11 11:05:20.472099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.205 [2024-07-11 11:05:20.472112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.205 [2024-07-11 11:05:20.472123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.205 [2024-07-11 11:05:20.472133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.205 [2024-07-11 11:05:20.472212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.205 [2024-07-11 11:05:20.472278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.205 [2024-07-11 11:05:20.472307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.205 [2024-07-11 11:05:20.472309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.205 11:05:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:06.771 [2024-07-11 11:05:20.900631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.771 11:05:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.031 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:07.031 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.290 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:07.290 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.549 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
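The bdev stack assembled over the records that follow layers a striped raid0 and a concat0 bdev on top of the malloc bdevs and exports everything through a single subsystem, which is what gives the initiator its four namespaces (nvme0n1..nvme0n4) at the connect step below. Condensed, with the full rpc.py path shortened:

    rpc.py bdev_malloc_create 64 512                                            # repeated: Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'            # striped RAID-0
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0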
00:19:07.549 11:05:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.807 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:07.807 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:08.066 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.324 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:08.324 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.581 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:08.581 11:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.839 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:08.839 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:09.097 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.360 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:09.360 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.617 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:09.617 11:05:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:09.874 11:05:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.874 [2024-07-11 11:05:24.298197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.132 11:05:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:10.391 11:05:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:10.391 11:05:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:11.326 11:05:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:13.233 11:05:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:13.233 [global] 00:19:13.233 thread=1 00:19:13.233 invalidate=1 00:19:13.233 rw=write 00:19:13.233 time_based=1 00:19:13.233 runtime=1 00:19:13.233 ioengine=libaio 00:19:13.233 direct=1 00:19:13.233 bs=4096 00:19:13.233 iodepth=1 00:19:13.233 norandommap=0 00:19:13.233 numjobs=1 00:19:13.233 00:19:13.233 verify_dump=1 00:19:13.233 verify_backlog=512 00:19:13.233 verify_state_save=0 00:19:13.233 do_verify=1 00:19:13.233 verify=crc32c-intel 00:19:13.233 [job0] 00:19:13.233 filename=/dev/nvme0n1 00:19:13.233 [job1] 00:19:13.233 filename=/dev/nvme0n2 00:19:13.233 [job2] 00:19:13.233 filename=/dev/nvme0n3 00:19:13.233 [job3] 00:19:13.233 filename=/dev/nvme0n4 00:19:13.233 Could not set queue depth (nvme0n1) 00:19:13.233 Could not set queue depth (nvme0n2) 00:19:13.233 Could not set queue depth (nvme0n3) 00:19:13.233 Could not set queue depth (nvme0n4) 00:19:13.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.491 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.491 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.491 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.491 fio-3.35 00:19:13.491 Starting 4 threads 00:19:14.868 00:19:14.868 job0: (groupid=0, jobs=1): err= 0: pid=251530: Thu Jul 11 11:05:28 2024 00:19:14.868 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:14.868 slat (nsec): min=5567, max=33956, avg=8017.45, stdev=2878.11 00:19:14.868 clat (usec): min=199, max=41772, avg=397.29, stdev=2563.72 00:19:14.868 lat (usec): min=205, max=41779, avg=405.31, stdev=2564.07 00:19:14.868 clat percentiles (usec): 00:19:14.868 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:19:14.868 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:19:14.868 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:19:14.868 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[41681], 99.95th=[41681], 00:19:14.868 | 99.99th=[41681] 00:19:14.868 write: IOPS=1951, BW=7804KiB/s (7991kB/s)(7812KiB/1001msec); 0 zone resets 00:19:14.868 slat (nsec): min=7059, max=54107, avg=12211.61, stdev=6359.67 00:19:14.868 clat 
(usec): min=139, max=312, avg=175.78, stdev=19.50 00:19:14.868 lat (usec): min=147, max=351, avg=188.00, stdev=24.46 00:19:14.868 clat percentiles (usec): 00:19:14.868 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:19:14.868 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:19:14.868 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:19:14.868 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 277], 99.95th=[ 314], 00:19:14.868 | 99.99th=[ 314] 00:19:14.868 bw ( KiB/s): min= 8192, max= 8192, per=35.61%, avg=8192.00, stdev= 0.00, samples=1 00:19:14.868 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:14.868 lat (usec) : 250=90.11%, 500=9.72% 00:19:14.868 lat (msec) : 50=0.17% 00:19:14.868 cpu : usr=3.30%, sys=4.40%, ctx=3489, majf=0, minf=1 00:19:14.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.868 issued rwts: total=1536,1953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.868 job1: (groupid=0, jobs=1): err= 0: pid=251531: Thu Jul 11 11:05:28 2024 00:19:14.868 read: IOPS=1061, BW=4248KiB/s (4350kB/s)(4252KiB/1001msec) 00:19:14.868 slat (nsec): min=5476, max=33999, avg=7365.71, stdev=2828.77 00:19:14.868 clat (usec): min=176, max=42188, avg=647.70, stdev=4164.16 00:19:14.868 lat (usec): min=182, max=42197, avg=655.06, stdev=4164.81 00:19:14.868 clat percentiles (usec): 00:19:14.868 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:19:14.868 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 227], 00:19:14.868 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 269], 00:19:14.868 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:14.868 | 99.99th=[42206] 00:19:14.868 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:14.868 slat (nsec): min=6752, max=66178, avg=12322.18, stdev=6234.92 00:19:14.868 clat (usec): min=124, max=1068, avg=180.57, stdev=59.67 00:19:14.868 lat (usec): min=131, max=1079, avg=192.89, stdev=61.70 00:19:14.868 clat percentiles (usec): 00:19:14.868 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:19:14.868 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 172], 00:19:14.868 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 255], 95.00th=[ 297], 00:19:14.868 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 1057], 99.95th=[ 1074], 00:19:14.868 | 99.99th=[ 1074] 00:19:14.868 bw ( KiB/s): min= 4096, max= 4096, per=17.81%, avg=4096.00, stdev= 0.00, samples=1 00:19:14.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:14.868 lat (usec) : 250=86.19%, 500=13.27% 00:19:14.868 lat (msec) : 2=0.12%, 50=0.42% 00:19:14.868 cpu : usr=2.40%, sys=3.30%, ctx=2599, majf=0, minf=2 00:19:14.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.868 issued rwts: total=1063,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.868 job2: (groupid=0, jobs=1): err= 0: pid=251532: Thu Jul 11 11:05:28 2024 00:19:14.868 read: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec) 00:19:14.868 slat (nsec): min=5621, max=37983, avg=7954.01, stdev=3281.00 00:19:14.868 clat (usec): min=186, max=42068, avg=635.14, stdev=4017.04 00:19:14.868 lat (usec): min=192, max=42075, avg=643.09, stdev=4017.15 00:19:14.868 clat percentiles (usec): 00:19:14.869 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:19:14.869 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 237], 00:19:14.869 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 359], 00:19:14.869 | 99.00th=[ 2737], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:19:14.869 | 99.99th=[42206] 00:19:14.869 write: IOPS=1247, BW=4991KiB/s (5111kB/s)(4996KiB/1001msec); 0 zone resets 00:19:14.869 slat (usec): min=7, max=40637, avg=61.83, stdev=1247.14 00:19:14.869 clat (usec): min=133, max=468, avg=203.81, stdev=64.25 00:19:14.869 lat (usec): min=141, max=41001, avg=265.64, stdev=1255.29 00:19:14.869 clat percentiles (usec): 00:19:14.869 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:19:14.869 | 30.00th=[ 159], 40.00th=[ 169], 50.00th=[ 184], 60.00th=[ 192], 00:19:14.869 | 70.00th=[ 212], 80.00th=[ 255], 90.00th=[ 302], 95.00th=[ 334], 00:19:14.869 | 99.00th=[ 404], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 469], 00:19:14.869 | 99.99th=[ 469] 00:19:14.869 bw ( KiB/s): min= 8192, max= 8192, per=35.61%, avg=8192.00, stdev= 0.00, samples=1 00:19:14.869 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:14.869 lat (usec) : 250=78.88%, 500=20.50%, 750=0.13% 00:19:14.869 lat (msec) : 4=0.04%, 50=0.44% 00:19:14.869 cpu : usr=2.10%, sys=3.60%, ctx=2277, majf=0, minf=1 00:19:14.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.869 issued rwts: total=1024,1249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.869 job3: (groupid=0, jobs=1): err= 0: pid=251533: Thu Jul 11 11:05:28 2024 00:19:14.869 read: IOPS=716, BW=2866KiB/s (2935kB/s)(2872KiB/1002msec) 00:19:14.869 slat (nsec): min=4849, max=34154, avg=8432.51, stdev=3845.20 00:19:14.869 clat (usec): min=192, max=41976, avg=1001.83, stdev=5471.29 00:19:14.869 lat (usec): min=198, max=41990, avg=1010.26, stdev=5471.98 00:19:14.869 clat percentiles (usec): 00:19:14.869 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:19:14.869 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 233], 00:19:14.869 | 70.00th=[ 239], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[ 379], 00:19:14.869 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:14.869 | 99.99th=[42206] 00:19:14.869 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:19:14.869 slat (usec): min=6, max=40641, avg=71.30, stdev=1410.90 00:19:14.869 clat (usec): min=136, max=535, avg=193.52, stdev=56.42 00:19:14.869 lat (usec): min=143, max=41003, avg=264.82, stdev=1418.91 00:19:14.869 clat percentiles (usec): 00:19:14.869 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:19:14.869 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 192], 00:19:14.869 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 285], 95.00th=[ 326], 00:19:14.869 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 433], 99.95th=[ 537], 00:19:14.869 | 99.99th=[ 537] 00:19:14.869 bw ( KiB/s): min= 2848, max= 5344, 
per=17.81%, avg=4096.00, stdev=1764.94, samples=2 00:19:14.869 iops : min= 712, max= 1336, avg=1024.00, stdev=441.23, samples=2 00:19:14.869 lat (usec) : 250=81.80%, 500=17.34%, 750=0.06% 00:19:14.869 lat (msec) : 20=0.06%, 50=0.75% 00:19:14.869 cpu : usr=1.30%, sys=1.60%, ctx=1747, majf=0, minf=1 00:19:14.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.869 issued rwts: total=718,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.869 00:19:14.869 Run status group 0 (all jobs): 00:19:14.869 READ: bw=16.9MiB/s (17.7MB/s), 2866KiB/s-6138KiB/s (2935kB/s-6285kB/s), io=17.0MiB (17.8MB), run=1001-1002msec 00:19:14.869 WRITE: bw=22.5MiB/s (23.6MB/s), 4088KiB/s-7804KiB/s (4186kB/s-7991kB/s), io=22.5MiB (23.6MB), run=1001-1002msec 00:19:14.869 00:19:14.869 Disk stats (read/write): 00:19:14.869 nvme0n1: ios=1167/1536, merge=0/0, ticks=506/249, in_queue=755, util=81.76% 00:19:14.869 nvme0n2: ios=1018/1024, merge=0/0, ticks=655/183, in_queue=838, util=85.76% 00:19:14.869 nvme0n3: ios=975/1024, merge=0/0, ticks=817/184, in_queue=1001, util=94.86% 00:19:14.869 nvme0n4: ios=734/1024, merge=0/0, ticks=1364/193, in_queue=1557, util=99.56% 00:19:14.869 11:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:14.869 [global] 00:19:14.869 thread=1 00:19:14.869 invalidate=1 00:19:14.869 rw=randwrite 00:19:14.869 time_based=1 00:19:14.869 runtime=1 00:19:14.869 ioengine=libaio 00:19:14.869 direct=1 00:19:14.869 bs=4096 00:19:14.869 iodepth=1 00:19:14.869 norandommap=0 00:19:14.869 numjobs=1 00:19:14.869 00:19:14.869 verify_dump=1 00:19:14.869 verify_backlog=512 00:19:14.869 verify_state_save=0 00:19:14.869 do_verify=1 00:19:14.869 verify=crc32c-intel 00:19:14.869 [job0] 00:19:14.869 filename=/dev/nvme0n1 00:19:14.869 [job1] 00:19:14.869 filename=/dev/nvme0n2 00:19:14.869 [job2] 00:19:14.869 filename=/dev/nvme0n3 00:19:14.869 [job3] 00:19:14.869 filename=/dev/nvme0n4 00:19:14.869 Could not set queue depth (nvme0n1) 00:19:14.869 Could not set queue depth (nvme0n2) 00:19:14.869 Could not set queue depth (nvme0n3) 00:19:14.869 Could not set queue depth (nvme0n4) 00:19:14.869 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.869 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.869 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.869 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.869 fio-3.35 00:19:14.869 Starting 4 threads 00:19:16.243 00:19:16.243 job0: (groupid=0, jobs=1): err= 0: pid=251764: Thu Jul 11 11:05:30 2024 00:19:16.243 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:16.243 slat (nsec): min=5649, max=58563, avg=10904.49, stdev=5656.64 00:19:16.243 clat (usec): min=184, max=40766, avg=364.30, stdev=1400.72 00:19:16.243 lat (usec): min=190, max=40776, avg=375.21, stdev=1400.67 00:19:16.243 clat percentiles (usec): 00:19:16.243 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 225], 00:19:16.243 | 30.00th=[ 255], 40.00th=[ 
273], 50.00th=[ 285], 60.00th=[ 302], 00:19:16.243 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 457], 95.00th=[ 506], 00:19:16.243 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[37487], 99.95th=[40633], 00:19:16.243 | 99.99th=[40633] 00:19:16.243 write: IOPS=1896, BW=7584KiB/s (7766kB/s)(7592KiB/1001msec); 0 zone resets 00:19:16.243 slat (nsec): min=6330, max=58590, avg=14396.95, stdev=8553.95 00:19:16.243 clat (usec): min=114, max=2658, avg=202.11, stdev=79.12 00:19:16.243 lat (usec): min=122, max=2666, avg=216.51, stdev=83.63 00:19:16.243 clat percentiles (usec): 00:19:16.243 | 1.00th=[ 128], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 157], 00:19:16.243 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 200], 00:19:16.243 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 255], 95.00th=[ 330], 00:19:16.243 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 474], 99.95th=[ 2671], 00:19:16.243 | 99.99th=[ 2671] 00:19:16.243 bw ( KiB/s): min= 8192, max= 8192, per=34.99%, avg=8192.00, stdev= 0.00, samples=1 00:19:16.243 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:16.243 lat (usec) : 250=61.07%, 500=35.96%, 750=2.85% 00:19:16.243 lat (msec) : 2=0.03%, 4=0.03%, 50=0.06% 00:19:16.243 cpu : usr=3.20%, sys=5.80%, ctx=3435, majf=0, minf=1 00:19:16.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 issued rwts: total=1536,1898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.244 job1: (groupid=0, jobs=1): err= 0: pid=251765: Thu Jul 11 11:05:30 2024 00:19:16.244 read: IOPS=1057, BW=4230KiB/s (4332kB/s)(4332KiB/1024msec) 00:19:16.244 slat (nsec): min=6027, max=66338, avg=17051.50, stdev=10567.65 00:19:16.244 clat (usec): min=200, max=41951, avg=608.42, stdev=3270.60 00:19:16.244 lat (usec): min=211, max=41965, avg=625.47, stdev=3270.41 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 231], 20.00th=[ 269], 00:19:16.244 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 351], 00:19:16.244 | 70.00th=[ 371], 80.00th=[ 416], 90.00th=[ 498], 95.00th=[ 510], 00:19:16.244 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:19:16.244 | 99.99th=[42206] 00:19:16.244 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:19:16.244 slat (nsec): min=7811, max=46882, avg=13404.49, stdev=4922.16 00:19:16.244 clat (usec): min=126, max=377, avg=204.26, stdev=28.79 00:19:16.244 lat (usec): min=136, max=403, avg=217.66, stdev=30.57 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 182], 00:19:16.244 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:19:16.244 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 241], 00:19:16.244 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 351], 99.95th=[ 379], 00:19:16.244 | 99.99th=[ 379] 00:19:16.244 bw ( KiB/s): min= 4096, max= 8192, per=26.24%, avg=6144.00, stdev=2896.31, samples=2 00:19:16.244 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:16.244 lat (usec) : 250=64.18%, 500=31.77%, 750=3.78% 00:19:16.244 lat (msec) : 50=0.27% 00:19:16.244 cpu : usr=1.96%, sys=4.50%, ctx=2620, majf=0, minf=1 00:19:16.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.244 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 issued rwts: total=1083,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.244 job2: (groupid=0, jobs=1): err= 0: pid=251766: Thu Jul 11 11:05:30 2024 00:19:16.244 read: IOPS=1870, BW=7481KiB/s (7660kB/s)(7488KiB/1001msec) 00:19:16.244 slat (nsec): min=4508, max=64918, avg=12711.17, stdev=9400.62 00:19:16.244 clat (usec): min=190, max=901, avg=284.58, stdev=72.69 00:19:16.244 lat (usec): min=196, max=907, avg=297.29, stdev=77.13 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 231], 00:19:16.244 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 277], 00:19:16.244 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 375], 95.00th=[ 420], 00:19:16.244 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 881], 99.95th=[ 898], 00:19:16.244 | 99.99th=[ 898] 00:19:16.244 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:16.244 slat (nsec): min=6699, max=43190, avg=11215.69, stdev=4816.91 00:19:16.244 clat (usec): min=128, max=3920, avg=198.77, stdev=100.70 00:19:16.244 lat (usec): min=135, max=3932, avg=209.98, stdev=101.57 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:19:16.244 | 30.00th=[ 169], 40.00th=[ 188], 50.00th=[ 204], 60.00th=[ 210], 00:19:16.244 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 243], 00:19:16.244 | 99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 1303], 99.95th=[ 1778], 00:19:16.244 | 99.99th=[ 3916] 00:19:16.244 bw ( KiB/s): min= 8192, max= 8192, per=34.99%, avg=8192.00, stdev= 0.00, samples=1 00:19:16.244 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:16.244 lat (usec) : 250=71.02%, 500=28.14%, 750=0.69%, 1000=0.08% 00:19:16.244 lat (msec) : 2=0.05%, 4=0.03% 00:19:16.244 cpu : usr=2.10%, sys=6.00%, ctx=3920, majf=0, minf=2 00:19:16.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 issued rwts: total=1872,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.244 job3: (groupid=0, jobs=1): err= 0: pid=251767: Thu Jul 11 11:05:30 2024 00:19:16.244 read: IOPS=77, BW=310KiB/s (317kB/s)(316KiB/1020msec) 00:19:16.244 slat (nsec): min=7373, max=34267, avg=11938.97, stdev=5978.02 00:19:16.244 clat (usec): min=203, max=42067, avg=11410.03, stdev=18248.80 00:19:16.244 lat (usec): min=212, max=42084, avg=11421.97, stdev=18252.89 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 231], 00:19:16.244 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 260], 00:19:16.244 | 70.00th=[ 437], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:16.244 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:16.244 | 99.99th=[42206] 00:19:16.244 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:19:16.244 slat (nsec): min=10350, max=51867, avg=21577.17, stdev=4428.96 00:19:16.244 clat (usec): min=165, max=369, avg=202.03, stdev=14.33 00:19:16.244 lat (usec): min=177, max=391, avg=223.61, 
stdev=15.57 00:19:16.244 clat percentiles (usec): 00:19:16.244 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:19:16.244 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:19:16.244 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 223], 00:19:16.244 | 99.00th=[ 237], 99.50th=[ 262], 99.90th=[ 371], 99.95th=[ 371], 00:19:16.244 | 99.99th=[ 371] 00:19:16.244 bw ( KiB/s): min= 4096, max= 4096, per=17.49%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.244 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.244 lat (usec) : 250=92.72%, 500=3.55% 00:19:16.244 lat (msec) : 20=0.17%, 50=3.55% 00:19:16.244 cpu : usr=0.98%, sys=1.28%, ctx=591, majf=0, minf=1 00:19:16.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.244 issued rwts: total=79,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.244 00:19:16.244 Run status group 0 (all jobs): 00:19:16.244 READ: bw=17.4MiB/s (18.3MB/s), 310KiB/s-7481KiB/s (317kB/s-7660kB/s), io=17.9MiB (18.7MB), run=1001-1024msec 00:19:16.244 WRITE: bw=22.9MiB/s (24.0MB/s), 2008KiB/s-8184KiB/s (2056kB/s-8380kB/s), io=23.4MiB (24.6MB), run=1001-1024msec 00:19:16.244 00:19:16.244 Disk stats (read/write): 00:19:16.244 nvme0n1: ios=1346/1536, merge=0/0, ticks=1457/303, in_queue=1760, util=96.79% 00:19:16.244 nvme0n2: ios=1073/1536, merge=0/0, ticks=818/309, in_queue=1127, util=96.74% 00:19:16.244 nvme0n3: ios=1536/1693, merge=0/0, ticks=435/333, in_queue=768, util=88.77% 00:19:16.244 nvme0n4: ios=74/512, merge=0/0, ticks=697/96, in_queue=793, util=89.52% 00:19:16.244 11:05:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:16.244 [global] 00:19:16.244 thread=1 00:19:16.244 invalidate=1 00:19:16.244 rw=write 00:19:16.244 time_based=1 00:19:16.244 runtime=1 00:19:16.244 ioengine=libaio 00:19:16.244 direct=1 00:19:16.244 bs=4096 00:19:16.244 iodepth=128 00:19:16.244 norandommap=0 00:19:16.244 numjobs=1 00:19:16.244 00:19:16.244 verify_dump=1 00:19:16.244 verify_backlog=512 00:19:16.244 verify_state_save=0 00:19:16.244 do_verify=1 00:19:16.244 verify=crc32c-intel 00:19:16.244 [job0] 00:19:16.244 filename=/dev/nvme0n1 00:19:16.244 [job1] 00:19:16.244 filename=/dev/nvme0n2 00:19:16.244 [job2] 00:19:16.244 filename=/dev/nvme0n3 00:19:16.244 [job3] 00:19:16.244 filename=/dev/nvme0n4 00:19:16.244 Could not set queue depth (nvme0n1) 00:19:16.244 Could not set queue depth (nvme0n2) 00:19:16.244 Could not set queue depth (nvme0n3) 00:19:16.244 Could not set queue depth (nvme0n4) 00:19:16.502 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.502 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.502 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.502 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.502 fio-3.35 00:19:16.502 Starting 4 threads 00:19:17.875 00:19:17.875 job0: (groupid=0, jobs=1): err= 0: pid=251997: Thu Jul 11 11:05:31 2024 00:19:17.875 read: IOPS=5695, 
BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec) 00:19:17.875 slat (usec): min=2, max=13427, avg=82.33, stdev=456.88 00:19:17.875 clat (usec): min=572, max=23729, avg=10801.87, stdev=1850.18 00:19:17.875 lat (usec): min=3550, max=23743, avg=10884.20, stdev=1871.16 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10159], 00:19:17.875 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:19:17.875 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:19:17.875 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[23200], 00:19:17.875 | 99.99th=[23725] 00:19:17.875 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:19:17.875 slat (usec): min=4, max=5420, avg=78.12, stdev=371.45 00:19:17.875 clat (usec): min=6754, max=15522, avg=10600.60, stdev=1011.49 00:19:17.875 lat (usec): min=6763, max=15535, avg=10678.72, stdev=1040.75 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10028], 00:19:17.875 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:17.875 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[12125], 00:19:17.875 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15008], 99.95th=[15139], 00:19:17.875 | 99.99th=[15533] 00:19:17.875 bw ( KiB/s): min=24200, max=24576, per=35.80%, avg=24388.00, stdev=265.87, samples=2 00:19:17.875 iops : min= 6050, max= 6144, avg=6097.00, stdev=66.47, samples=2 00:19:17.875 lat (usec) : 750=0.01% 00:19:17.875 lat (msec) : 4=0.35%, 10=15.38%, 20=83.18%, 50=1.07% 00:19:17.875 cpu : usr=6.19%, sys=10.48%, ctx=586, majf=0, minf=1 00:19:17.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:17.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.875 issued rwts: total=5713,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.875 job1: (groupid=0, jobs=1): err= 0: pid=251998: Thu Jul 11 11:05:31 2024 00:19:17.875 read: IOPS=3321, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec) 00:19:17.875 slat (usec): min=2, max=8752, avg=107.89, stdev=608.15 00:19:17.875 clat (usec): min=740, max=35545, avg=12816.14, stdev=3384.85 00:19:17.875 lat (usec): min=2413, max=35552, avg=12924.03, stdev=3434.24 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 5604], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10814], 00:19:17.875 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12387], 60.00th=[12911], 00:19:17.875 | 70.00th=[14222], 80.00th=[15533], 90.00th=[16188], 95.00th=[16909], 00:19:17.875 | 99.00th=[25035], 99.50th=[26346], 99.90th=[35390], 99.95th=[35390], 00:19:17.875 | 99.99th=[35390] 00:19:17.875 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:19:17.875 slat (usec): min=3, max=27064, avg=173.40, stdev=1055.35 00:19:17.875 clat (usec): min=5600, max=70882, avg=23042.69, stdev=16009.17 00:19:17.875 lat (usec): min=5613, max=70906, avg=23216.09, stdev=16123.32 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[10683], 20.00th=[10814], 00:19:17.875 | 30.00th=[11207], 40.00th=[11338], 50.00th=[13304], 60.00th=[22938], 00:19:17.875 | 70.00th=[26608], 80.00th=[38011], 90.00th=[46924], 95.00th=[58459], 00:19:17.875 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 
99.95th=[68682], 00:19:17.875 | 99.99th=[70779] 00:19:17.875 bw ( KiB/s): min=10552, max=18120, per=21.05%, avg=14336.00, stdev=5351.38, samples=2 00:19:17.875 iops : min= 2638, max= 4530, avg=3584.00, stdev=1337.85, samples=2 00:19:17.875 lat (usec) : 750=0.01% 00:19:17.875 lat (msec) : 4=0.46%, 10=6.74%, 20=69.13%, 50=19.18%, 100=4.47% 00:19:17.875 cpu : usr=2.60%, sys=3.70%, ctx=422, majf=0, minf=1 00:19:17.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:17.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.875 issued rwts: total=3328,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.875 job2: (groupid=0, jobs=1): err= 0: pid=251999: Thu Jul 11 11:05:31 2024 00:19:17.875 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:19:17.875 slat (usec): min=2, max=20936, avg=119.16, stdev=762.24 00:19:17.875 clat (usec): min=6364, max=44682, avg=15571.79, stdev=5730.60 00:19:17.875 lat (usec): min=6369, max=44689, avg=15690.94, stdev=5794.69 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 8094], 5.00th=[10421], 10.00th=[11600], 20.00th=[12125], 00:19:17.875 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13698], 00:19:17.875 | 70.00th=[15008], 80.00th=[21103], 90.00th=[24249], 95.00th=[25560], 00:19:17.875 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:19:17.875 | 99.99th=[44827] 00:19:17.875 write: IOPS=4200, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1006msec); 0 zone resets 00:19:17.875 slat (usec): min=3, max=14661, avg=114.61, stdev=588.24 00:19:17.875 clat (usec): min=5425, max=33941, avg=14594.15, stdev=5196.48 00:19:17.875 lat (usec): min=6653, max=33975, avg=14708.76, stdev=5222.10 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[11863], 00:19:17.875 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[12911], 00:19:17.875 | 70.00th=[13566], 80.00th=[14615], 90.00th=[25297], 95.00th=[25560], 00:19:17.875 | 99.00th=[29492], 99.50th=[31065], 99.90th=[32637], 99.95th=[32637], 00:19:17.875 | 99.99th=[33817] 00:19:17.875 bw ( KiB/s): min=12392, max=20505, per=24.15%, avg=16448.50, stdev=5736.76, samples=2 00:19:17.875 iops : min= 3098, max= 5126, avg=4112.00, stdev=1434.01, samples=2 00:19:17.875 lat (msec) : 10=4.87%, 20=76.18%, 50=18.95% 00:19:17.875 cpu : usr=3.08%, sys=5.77%, ctx=440, majf=0, minf=1 00:19:17.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.875 issued rwts: total=4096,4226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.875 job3: (groupid=0, jobs=1): err= 0: pid=252000: Thu Jul 11 11:05:31 2024 00:19:17.875 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:19:17.875 slat (usec): min=2, max=14493, avg=154.73, stdev=982.55 00:19:17.875 clat (usec): min=1174, max=41768, avg=19174.58, stdev=6914.94 00:19:17.875 lat (usec): min=1196, max=41782, avg=19329.30, stdev=6988.31 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 3392], 5.00th=[ 7963], 10.00th=[10945], 20.00th=[13829], 00:19:17.875 | 30.00th=[14877], 40.00th=[16712], 
50.00th=[18220], 60.00th=[21627], 00:19:17.875 | 70.00th=[23725], 80.00th=[24511], 90.00th=[27919], 95.00th=[31065], 00:19:17.875 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:19:17.875 | 99.99th=[41681] 00:19:17.875 write: IOPS=3171, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1007msec); 0 zone resets 00:19:17.875 slat (usec): min=3, max=13465, avg=156.92, stdev=749.75 00:19:17.875 clat (usec): min=1313, max=56101, avg=21540.02, stdev=9528.53 00:19:17.875 lat (usec): min=1322, max=56108, avg=21696.94, stdev=9600.48 00:19:17.875 clat percentiles (usec): 00:19:17.875 | 1.00th=[ 7439], 5.00th=[10290], 10.00th=[11994], 20.00th=[14353], 00:19:17.875 | 30.00th=[15008], 40.00th=[16909], 50.00th=[17695], 60.00th=[23725], 00:19:17.875 | 70.00th=[25035], 80.00th=[25822], 90.00th=[34866], 95.00th=[41157], 00:19:17.875 | 99.00th=[53216], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:19:17.875 | 99.99th=[56361] 00:19:17.875 bw ( KiB/s): min=11384, max=13208, per=18.05%, avg=12296.00, stdev=1289.76, samples=2 00:19:17.875 iops : min= 2846, max= 3302, avg=3074.00, stdev=322.44, samples=2 00:19:17.875 lat (msec) : 2=0.34%, 4=0.64%, 10=4.92%, 20=48.72%, 50=44.54% 00:19:17.875 lat (msec) : 100=0.85% 00:19:17.875 cpu : usr=2.39%, sys=4.08%, ctx=343, majf=0, minf=1 00:19:17.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:17.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.875 issued rwts: total=3072,3194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.875 00:19:17.875 Run status group 0 (all jobs): 00:19:17.875 READ: bw=62.9MiB/s (65.9MB/s), 11.9MiB/s-22.2MiB/s (12.5MB/s-23.3MB/s), io=63.3MiB (66.4MB), run=1002-1007msec 00:19:17.875 WRITE: bw=66.5MiB/s (69.7MB/s), 12.4MiB/s-23.9MiB/s (13.0MB/s-25.1MB/s), io=67.0MiB (70.2MB), run=1002-1007msec 00:19:17.875 00:19:17.875 Disk stats (read/write): 00:19:17.875 nvme0n1: ios=4912/5120, merge=0/0, ticks=16297/15838, in_queue=32135, util=85.87% 00:19:17.875 nvme0n2: ios=2594/2568, merge=0/0, ticks=14657/34136, in_queue=48793, util=100.00% 00:19:17.875 nvme0n3: ios=3643/3895, merge=0/0, ticks=20544/21141, in_queue=41685, util=92.87% 00:19:17.875 nvme0n4: ios=2617/2879, merge=0/0, ticks=27028/45145, in_queue=72173, util=94.40% 00:19:17.875 11:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:17.875 [global] 00:19:17.875 thread=1 00:19:17.875 invalidate=1 00:19:17.875 rw=randwrite 00:19:17.875 time_based=1 00:19:17.875 runtime=1 00:19:17.875 ioengine=libaio 00:19:17.875 direct=1 00:19:17.875 bs=4096 00:19:17.875 iodepth=128 00:19:17.875 norandommap=0 00:19:17.875 numjobs=1 00:19:17.875 00:19:17.876 verify_dump=1 00:19:17.876 verify_backlog=512 00:19:17.876 verify_state_save=0 00:19:17.876 do_verify=1 00:19:17.876 verify=crc32c-intel 00:19:17.876 [job0] 00:19:17.876 filename=/dev/nvme0n1 00:19:17.876 [job1] 00:19:17.876 filename=/dev/nvme0n2 00:19:17.876 [job2] 00:19:17.876 filename=/dev/nvme0n3 00:19:17.876 [job3] 00:19:17.876 filename=/dev/nvme0n4 00:19:17.876 Could not set queue depth (nvme0n1) 00:19:17.876 Could not set queue depth (nvme0n2) 00:19:17.876 Could not set queue depth (nvme0n3) 00:19:17.876 Could not set queue depth (nvme0n4) 00:19:17.876 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.876 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.876 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.876 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.876 fio-3.35 00:19:17.876 Starting 4 threads 00:19:19.248 00:19:19.248 job0: (groupid=0, jobs=1): err= 0: pid=252245: Thu Jul 11 11:05:33 2024 00:19:19.248 read: IOPS=5713, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1008msec) 00:19:19.248 slat (usec): min=2, max=9531, avg=82.84, stdev=512.36 00:19:19.248 clat (usec): min=3693, max=19955, avg=11026.75, stdev=2181.80 00:19:19.248 lat (usec): min=3697, max=20001, avg=11109.59, stdev=2212.39 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 4752], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:19:19.248 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10945], 00:19:19.248 | 70.00th=[11207], 80.00th=[11731], 90.00th=[13566], 95.00th=[15795], 00:19:19.248 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[20055], 00:19:19.248 | 99.99th=[20055] 00:19:19.248 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:19:19.248 slat (usec): min=3, max=8048, avg=74.31, stdev=377.22 00:19:19.248 clat (usec): min=2085, max=19956, avg=10347.27, stdev=1747.57 00:19:19.248 lat (usec): min=2089, max=19963, avg=10421.58, stdev=1769.77 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 3523], 5.00th=[ 6718], 10.00th=[ 8455], 20.00th=[ 9634], 00:19:19.248 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:19:19.248 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:19:19.248 | 99.00th=[13698], 99.50th=[16712], 99.90th=[19530], 99.95th=[19530], 00:19:19.248 | 99.99th=[20055] 00:19:19.248 bw ( KiB/s): min=24568, max=24576, per=35.74%, avg=24572.00, stdev= 5.66, samples=2 00:19:19.248 iops : min= 6142, max= 6144, avg=6143.00, stdev= 1.41, samples=2 00:19:19.248 lat (msec) : 4=0.67%, 10=20.84%, 20=78.48% 00:19:19.248 cpu : usr=7.25%, sys=9.63%, ctx=592, majf=0, minf=15 00:19:19.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:19.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.248 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.248 job1: (groupid=0, jobs=1): err= 0: pid=252262: Thu Jul 11 11:05:33 2024 00:19:19.248 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:19:19.248 slat (usec): min=3, max=8975, avg=83.83, stdev=449.19 00:19:19.248 clat (usec): min=7954, max=26624, avg=11178.31, stdev=2034.82 00:19:19.248 lat (usec): min=7968, max=26635, avg=11262.14, stdev=2065.34 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:19:19.248 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:19.248 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12387], 95.00th=[14615], 00:19:19.248 | 99.00th=[20841], 99.50th=[24511], 99.90th=[26608], 99.95th=[26608], 00:19:19.248 | 99.99th=[26608] 00:19:19.248 write: IOPS=5715, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1004msec); 0 zone resets 00:19:19.248 
slat (usec): min=3, max=6065, avg=81.30, stdev=416.15 00:19:19.248 clat (usec): min=2841, max=26697, avg=11137.30, stdev=2663.07 00:19:19.248 lat (usec): min=3594, max=26721, avg=11218.60, stdev=2683.80 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[10028], 00:19:19.248 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:19.248 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12780], 95.00th=[16909], 00:19:19.248 | 99.00th=[23725], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:19:19.248 | 99.99th=[26608] 00:19:19.248 bw ( KiB/s): min=21632, max=23424, per=32.77%, avg=22528.00, stdev=1267.14, samples=2 00:19:19.248 iops : min= 5408, max= 5856, avg=5632.00, stdev=316.78, samples=2 00:19:19.248 lat (msec) : 4=0.08%, 10=13.53%, 20=84.89%, 50=1.50% 00:19:19.248 cpu : usr=9.67%, sys=10.37%, ctx=476, majf=0, minf=11 00:19:19.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:19.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.248 issued rwts: total=5632,5738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.248 job2: (groupid=0, jobs=1): err= 0: pid=252299: Thu Jul 11 11:05:33 2024 00:19:19.248 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:19:19.248 slat (usec): min=2, max=13461, avg=145.63, stdev=887.14 00:19:19.248 clat (usec): min=7679, max=33631, avg=17849.36, stdev=3210.44 00:19:19.248 lat (usec): min=7686, max=33660, avg=17994.99, stdev=3318.11 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 7701], 5.00th=[12125], 10.00th=[15139], 20.00th=[16909], 00:19:19.248 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:19:19.248 | 70.00th=[18220], 80.00th=[18744], 90.00th=[21103], 95.00th=[24773], 00:19:19.248 | 99.00th=[27919], 99.50th=[29754], 99.90th=[31327], 99.95th=[31327], 00:19:19.248 | 99.99th=[33817] 00:19:19.248 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1012msec); 0 zone resets 00:19:19.248 slat (usec): min=3, max=9544, avg=204.07, stdev=809.15 00:19:19.248 clat (usec): min=5331, max=55987, avg=27833.59, stdev=8654.18 00:19:19.248 lat (usec): min=9598, max=55997, avg=28037.66, stdev=8702.33 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[13566], 5.00th=[14091], 10.00th=[16450], 20.00th=[17695], 00:19:19.248 | 30.00th=[23725], 40.00th=[26346], 50.00th=[28705], 60.00th=[30802], 00:19:19.248 | 70.00th=[33817], 80.00th=[35390], 90.00th=[38536], 95.00th=[40109], 00:19:19.248 | 99.00th=[47449], 99.50th=[50594], 99.90th=[55837], 99.95th=[55837], 00:19:19.248 | 99.99th=[55837] 00:19:19.248 bw ( KiB/s): min=10448, max=12136, per=16.43%, avg=11292.00, stdev=1193.60, samples=2 00:19:19.248 iops : min= 2612, max= 3034, avg=2823.00, stdev=298.40, samples=2 00:19:19.248 lat (msec) : 10=0.85%, 20=53.80%, 50=44.93%, 100=0.42% 00:19:19.248 cpu : usr=3.46%, sys=6.63%, ctx=330, majf=0, minf=15 00:19:19.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:19.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.248 issued rwts: total=2560,2951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.248 job3: (groupid=0, 
jobs=1): err= 0: pid=252312: Thu Jul 11 11:05:33 2024 00:19:19.248 read: IOPS=2435, BW=9743KiB/s (9976kB/s)(9840KiB/1010msec) 00:19:19.248 slat (usec): min=2, max=8383, avg=159.27, stdev=918.13 00:19:19.248 clat (usec): min=2429, max=40892, avg=19024.44, stdev=4474.73 00:19:19.248 lat (usec): min=5204, max=40900, avg=19183.71, stdev=4567.55 00:19:19.248 clat percentiles (usec): 00:19:19.248 | 1.00th=[ 8291], 5.00th=[12125], 10.00th=[12256], 20.00th=[13173], 00:19:19.248 | 30.00th=[18220], 40.00th=[19268], 50.00th=[20317], 60.00th=[20317], 00:19:19.248 | 70.00th=[20579], 80.00th=[21627], 90.00th=[23462], 95.00th=[25297], 00:19:19.248 | 99.00th=[29754], 99.50th=[32375], 99.90th=[37487], 99.95th=[37487], 00:19:19.248 | 99.99th=[40633] 00:19:19.248 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:19:19.249 slat (usec): min=3, max=11296, avg=231.45, stdev=962.43 00:19:19.249 clat (usec): min=1017, max=74701, avg=31775.68, stdev=18530.02 00:19:19.249 lat (usec): min=1027, max=74713, avg=32007.13, stdev=18657.94 00:19:19.249 clat percentiles (usec): 00:19:19.249 | 1.00th=[ 4817], 5.00th=[10945], 10.00th=[11994], 20.00th=[12649], 00:19:19.249 | 30.00th=[15270], 40.00th=[25822], 50.00th=[27395], 60.00th=[30802], 00:19:19.249 | 70.00th=[42206], 80.00th=[49021], 90.00th=[60556], 95.00th=[68682], 00:19:19.249 | 99.00th=[73925], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:19:19.249 | 99.99th=[74974] 00:19:19.249 bw ( KiB/s): min= 8136, max=12344, per=14.90%, avg=10240.00, stdev=2975.51, samples=2 00:19:19.249 iops : min= 2034, max= 3086, avg=2560.00, stdev=743.88, samples=2 00:19:19.249 lat (msec) : 2=0.14%, 4=0.30%, 10=2.99%, 20=35.18%, 50=51.61% 00:19:19.249 lat (msec) : 100=9.78% 00:19:19.249 cpu : usr=1.78%, sys=3.96%, ctx=313, majf=0, minf=9 00:19:19.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:19.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.249 issued rwts: total=2460,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.249 00:19:19.249 Run status group 0 (all jobs): 00:19:19.249 READ: bw=63.3MiB/s (66.4MB/s), 9743KiB/s-22.3MiB/s (9976kB/s-23.4MB/s), io=64.1MiB (67.2MB), run=1004-1012msec 00:19:19.249 WRITE: bw=67.1MiB/s (70.4MB/s), 9.90MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=67.9MiB (71.2MB), run=1004-1012msec 00:19:19.249 00:19:19.249 Disk stats (read/write): 00:19:19.249 nvme0n1: ios=4903/5120, merge=0/0, ticks=32018/29182, in_queue=61200, util=97.39% 00:19:19.249 nvme0n2: ios=4631/4875, merge=0/0, ticks=17466/15900, in_queue=33366, util=98.17% 00:19:19.249 nvme0n3: ios=2160/2560, merge=0/0, ticks=20009/30723, in_queue=50732, util=97.27% 00:19:19.249 nvme0n4: ios=2069/2279, merge=0/0, ticks=17674/30966, in_queue=48640, util=98.94% 00:19:19.249 11:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:19.249 11:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=252484 00:19:19.249 11:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:19.249 11:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:19.249 [global] 00:19:19.249 thread=1 00:19:19.249 invalidate=1 00:19:19.249 rw=read 00:19:19.249 time_based=1 00:19:19.249 runtime=10 00:19:19.249 ioengine=libaio 00:19:19.249 
direct=1 00:19:19.249 bs=4096 00:19:19.249 iodepth=1 00:19:19.249 norandommap=1 00:19:19.249 numjobs=1 00:19:19.249 00:19:19.249 [job0] 00:19:19.249 filename=/dev/nvme0n1 00:19:19.249 [job1] 00:19:19.249 filename=/dev/nvme0n2 00:19:19.249 [job2] 00:19:19.249 filename=/dev/nvme0n3 00:19:19.249 [job3] 00:19:19.249 filename=/dev/nvme0n4 00:19:19.249 Could not set queue depth (nvme0n1) 00:19:19.249 Could not set queue depth (nvme0n2) 00:19:19.249 Could not set queue depth (nvme0n3) 00:19:19.249 Could not set queue depth (nvme0n4) 00:19:19.249 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.249 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.249 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.249 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.249 fio-3.35 00:19:19.249 Starting 4 threads 00:19:22.524 11:05:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:22.524 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=15986688, buflen=4096 00:19:22.524 fio: pid=252580, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:22.524 11:05:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:22.524 11:05:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.524 11:05:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:22.524 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8634368, buflen=4096 00:19:22.524 fio: pid=252579, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:22.781 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1658880, buflen=4096 00:19:22.781 fio: pid=252577, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:22.781 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.782 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:23.040 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59686912, buflen=4096 00:19:23.040 fio: pid=252578, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.040 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.040 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:23.040 00:19:23.040 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=252577: Thu Jul 11 11:05:37 2024 00:19:23.040 read: IOPS=118, BW=472KiB/s (484kB/s)(1620KiB/3429msec) 00:19:23.040 slat (usec): min=6, max=11941, avg=40.78, stdev=592.14 00:19:23.040 clat (usec): min=178, max=41785, avg=8367.65, stdev=16234.45 00:19:23.040 lat (usec): min=186, max=53042, avg=8408.45, stdev=16307.91 00:19:23.040 clat 
percentiles (usec): 00:19:23.040 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:19:23.040 | 30.00th=[ 243], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:19:23.040 | 70.00th=[ 334], 80.00th=[ 799], 90.00th=[41157], 95.00th=[41157], 00:19:23.040 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:23.040 | 99.99th=[41681] 00:19:23.040 bw ( KiB/s): min= 96, max= 1504, per=2.31%, avg=525.17, stdev=661.27, samples=6 00:19:23.040 iops : min= 24, max= 376, avg=131.17, stdev=165.41, samples=6 00:19:23.040 lat (usec) : 250=32.76%, 500=46.31%, 750=0.25%, 1000=0.49% 00:19:23.040 lat (msec) : 50=19.95% 00:19:23.040 cpu : usr=0.00%, sys=0.29%, ctx=407, majf=0, minf=1 00:19:23.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 issued rwts: total=406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.040 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=252578: Thu Jul 11 11:05:37 2024 00:19:23.040 read: IOPS=3950, BW=15.4MiB/s (16.2MB/s)(56.9MiB/3689msec) 00:19:23.040 slat (usec): min=5, max=25580, avg=16.88, stdev=344.51 00:19:23.040 clat (usec): min=169, max=13537, avg=232.00, stdev=113.31 00:19:23.040 lat (usec): min=175, max=25854, avg=248.89, stdev=363.95 00:19:23.040 clat percentiles (usec): 00:19:23.040 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:19:23.040 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:19:23.040 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:19:23.040 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 383], 99.95th=[ 396], 00:19:23.040 | 99.99th=[ 906] 00:19:23.040 bw ( KiB/s): min=14559, max=18088, per=69.76%, avg=15875.43, stdev=1327.02, samples=7 00:19:23.040 iops : min= 3639, max= 4522, avg=3968.71, stdev=331.90, samples=7 00:19:23.040 lat (usec) : 250=76.20%, 500=23.76%, 750=0.01%, 1000=0.01% 00:19:23.040 lat (msec) : 20=0.01% 00:19:23.040 cpu : usr=2.39%, sys=6.59%, ctx=14582, majf=0, minf=1 00:19:23.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 issued rwts: total=14573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.040 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=252579: Thu Jul 11 11:05:37 2024 00:19:23.040 read: IOPS=664, BW=2657KiB/s (2720kB/s)(8432KiB/3174msec) 00:19:23.040 slat (nsec): min=4103, max=35089, avg=6939.28, stdev=4168.35 00:19:23.040 clat (usec): min=181, max=42044, avg=1486.21, stdev=7035.11 00:19:23.040 lat (usec): min=186, max=42058, avg=1493.15, stdev=7036.91 00:19:23.040 clat percentiles (usec): 00:19:23.040 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:19:23.040 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:19:23.040 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 310], 95.00th=[ 388], 00:19:23.040 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:19:23.040 | 99.99th=[42206] 00:19:23.040 bw ( KiB/s): min= 112, max=13680, 
per=10.64%, avg=2422.50, stdev=5515.39, samples=6 00:19:23.040 iops : min= 28, max= 3420, avg=605.50, stdev=1378.91, samples=6 00:19:23.040 lat (usec) : 250=82.12%, 500=14.27%, 750=0.28%, 1000=0.14% 00:19:23.040 lat (msec) : 2=0.05%, 50=3.08% 00:19:23.040 cpu : usr=0.25%, sys=0.47%, ctx=2114, majf=0, minf=1 00:19:23.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.040 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=252580: Thu Jul 11 11:05:37 2024 00:19:23.040 read: IOPS=1342, BW=5367KiB/s (5496kB/s)(15.2MiB/2909msec) 00:19:23.040 slat (nsec): min=5473, max=64019, avg=11150.71, stdev=6400.45 00:19:23.040 clat (usec): min=176, max=41243, avg=725.63, stdev=4334.11 00:19:23.040 lat (usec): min=182, max=41258, avg=736.78, stdev=4334.83 00:19:23.040 clat percentiles (usec): 00:19:23.040 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:19:23.040 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:19:23.040 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 404], 95.00th=[ 465], 00:19:23.040 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:23.040 | 99.99th=[41157] 00:19:23.040 bw ( KiB/s): min= 96, max=12520, per=15.94%, avg=3627.00, stdev=5429.55, samples=5 00:19:23.040 iops : min= 24, max= 3130, avg=906.60, stdev=1357.51, samples=5 00:19:23.040 lat (usec) : 250=66.09%, 500=30.38%, 750=2.36% 00:19:23.040 lat (msec) : 50=1.15% 00:19:23.040 cpu : usr=1.03%, sys=2.20%, ctx=3904, majf=0, minf=1 00:19:23.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.040 issued rwts: total=3904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.040 00:19:23.040 Run status group 0 (all jobs): 00:19:23.040 READ: bw=22.2MiB/s (23.3MB/s), 472KiB/s-15.4MiB/s (484kB/s-16.2MB/s), io=82.0MiB (86.0MB), run=2909-3689msec 00:19:23.040 00:19:23.040 Disk stats (read/write): 00:19:23.040 nvme0n1: ios=403/0, merge=0/0, ticks=3309/0, in_queue=3309, util=95.59% 00:19:23.041 nvme0n2: ios=14287/0, merge=0/0, ticks=3363/0, in_queue=3363, util=97.45% 00:19:23.041 nvme0n3: ios=2155/0, merge=0/0, ticks=4092/0, in_queue=4092, util=100.00% 00:19:23.041 nvme0n4: ios=3750/0, merge=0/0, ticks=2750/0, in_queue=2750, util=96.71% 00:19:23.299 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.299 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:23.557 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.557 11:05:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:23.815 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.815 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:24.073 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:24.073 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:24.331 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:24.331 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 252484 00:19:24.331 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:24.331 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:24.589 nvmf hotplug test: fio failed as expected 00:19:24.589 11:05:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.847 rmmod nvme_tcp 00:19:24.847 rmmod nvme_fabrics 00:19:24.847 rmmod nvme_keyring 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@125 -- # return 0 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 250468 ']' 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 250468 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 250468 ']' 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 250468 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 250468 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 250468' 00:19:24.847 killing process with pid 250468 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 250468 00:19:24.847 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 250468 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.106 11:05:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.013 11:05:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.271 00:19:27.271 real 0m23.499s 00:19:27.271 user 1m21.354s 00:19:27.271 sys 0m7.463s 00:19:27.271 11:05:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.271 11:05:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 ************************************ 00:19:27.271 END TEST nvmf_fio_target 00:19:27.271 ************************************ 00:19:27.271 11:05:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.271 11:05:41 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.271 11:05:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.271 11:05:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.271 11:05:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.271 ************************************ 00:19:27.271 START TEST nvmf_bdevio 00:19:27.272 ************************************ 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.272 * Looking for test storage... 
00:19:27.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.272 11:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.806 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:19:29.806 00:19:29.806 --- 10.0.0.2 ping statistics --- 00:19:29.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.806 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:19:29.806 00:19:29.806 --- 10.0.0.1 ping statistics --- 00:19:29.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.806 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=255194 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 255194 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 255194 ']' 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.806 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.807 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.807 11:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 [2024-07-11 11:05:43.830173] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:19:29.807 [2024-07-11 11:05:43.830256] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.807 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.807 [2024-07-11 11:05:43.894597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.807 [2024-07-11 11:05:43.984315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.807 [2024-07-11 11:05:43.984375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:29.807 [2024-07-11 11:05:43.984388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.807 [2024-07-11 11:05:43.984400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.807 [2024-07-11 11:05:43.984409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.807 [2024-07-11 11:05:43.984495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.807 [2024-07-11 11:05:43.984556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:29.807 [2024-07-11 11:05:43.984622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:29.807 [2024-07-11 11:05:43.984624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 [2024-07-11 11:05:44.135486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 Malloc0 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
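With nvmf_tgt up inside the namespace, bdevio.sh provisions the target over JSON-RPC (the listener confirmation follows just below). The five calls the trace just issued, written as direct rpc.py invocations against the default /var/tmp/spdk.sock socket — rpc.py path shortened, flags copied literally from the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192    # -u 8192 sets the in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420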
00:19:29.807 [2024-07-11 11:05:44.188485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:29.807 { 00:19:29.807 "params": { 00:19:29.807 "name": "Nvme$subsystem", 00:19:29.807 "trtype": "$TEST_TRANSPORT", 00:19:29.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.807 "adrfam": "ipv4", 00:19:29.807 "trsvcid": "$NVMF_PORT", 00:19:29.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.807 "hdgst": ${hdgst:-false}, 00:19:29.807 "ddgst": ${ddgst:-false} 00:19:29.807 }, 00:19:29.807 "method": "bdev_nvme_attach_controller" 00:19:29.807 } 00:19:29.807 EOF 00:19:29.807 )") 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:29.807 11:05:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:29.807 "params": { 00:19:29.807 "name": "Nvme1", 00:19:29.807 "trtype": "tcp", 00:19:29.807 "traddr": "10.0.0.2", 00:19:29.807 "adrfam": "ipv4", 00:19:29.807 "trsvcid": "4420", 00:19:29.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.807 "hdgst": false, 00:19:29.807 "ddgst": false 00:19:29.807 }, 00:19:29.807 "method": "bdev_nvme_attach_controller" 00:19:29.807 }' 00:19:30.065 [2024-07-11 11:05:44.235554] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
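gen_nvmf_target_json renders the heredoc template above into one controller entry per subsystem; the fully expanded entry is printed in the trace and reproduced below. bdevio receives it on /dev/fd/62 (embedded in a larger JSON config whose surrounding wrapper is not shown in the trace) and uses it to attach to the target as host nqn.2016-06.io.spdk:host1:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }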
00:19:30.065 [2024-07-11 11:05:44.235635] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255223 ] 00:19:30.065 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.065 [2024-07-11 11:05:44.295553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.065 [2024-07-11 11:05:44.384018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.065 [2024-07-11 11:05:44.384071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.065 [2024-07-11 11:05:44.384074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.323 I/O targets: 00:19:30.323 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:30.323 00:19:30.323 00:19:30.323 CUnit - A unit testing framework for C - Version 2.1-3 00:19:30.323 http://cunit.sourceforge.net/ 00:19:30.323 00:19:30.323 00:19:30.323 Suite: bdevio tests on: Nvme1n1 00:19:30.323 Test: blockdev write read block ...passed 00:19:30.580 Test: blockdev write zeroes read block ...passed 00:19:30.580 Test: blockdev write zeroes read no split ...passed 00:19:30.580 Test: blockdev write zeroes read split ...passed 00:19:30.581 Test: blockdev write zeroes read split partial ...passed 00:19:30.581 Test: blockdev reset ...[2024-07-11 11:05:44.795239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.581 [2024-07-11 11:05:44.795350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90ea60 (9): Bad file descriptor 00:19:30.581 [2024-07-11 11:05:44.810318] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:30.581 passed 00:19:30.581 Test: blockdev write read 8 blocks ...passed 00:19:30.581 Test: blockdev write read size > 128k ...passed 00:19:30.581 Test: blockdev write read invalid size ...passed 00:19:30.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:30.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:30.581 Test: blockdev write read max offset ...passed 00:19:30.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:30.839 Test: blockdev writev readv 8 blocks ...passed 00:19:30.839 Test: blockdev writev readv 30 x 1block ...passed 00:19:30.839 Test: blockdev writev readv block ...passed 00:19:30.839 Test: blockdev writev readv size > 128k ...passed 00:19:30.839 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:30.839 Test: blockdev comparev and writev ...[2024-07-11 11:05:45.065927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.065965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.065989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.066007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.066336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.066360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.066381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.066398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.066731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.066762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.066787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.066804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.067150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.067173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.067194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.839 [2024-07-11 11:05:45.067210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.839 passed 00:19:30.839 Test: blockdev nvme passthru rw ...passed 00:19:30.839 Test: blockdev nvme passthru vendor specific ...[2024-07-11 11:05:45.150010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.839 [2024-07-11 11:05:45.150040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.150176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.839 [2024-07-11 11:05:45.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.150340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.839 [2024-07-11 11:05:45.150363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.839 [2024-07-11 11:05:45.150497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.839 [2024-07-11 11:05:45.150519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:30.839 passed 00:19:30.839 Test: blockdev nvme admin passthru ...passed 00:19:30.839 Test: blockdev copy ...passed 00:19:30.839 00:19:30.839 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.839 suites 1 1 n/a 0 0 00:19:30.839 tests 23 23 23 0 0 00:19:30.839 asserts 152 152 152 0 n/a 00:19:30.839 00:19:30.839 Elapsed time = 1.055 seconds 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.097 rmmod nvme_tcp 00:19:31.097 rmmod nvme_fabrics 00:19:31.097 rmmod nvme_keyring 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 255194 ']' 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 255194 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
255194 ']' 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 255194 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 255194 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 255194' 00:19:31.097 killing process with pid 255194 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 255194 00:19:31.097 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 255194 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.355 11:05:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.891 11:05:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.891 00:19:33.891 real 0m6.318s 00:19:33.891 user 0m10.124s 00:19:33.891 sys 0m2.076s 00:19:33.891 11:05:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.891 11:05:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:33.891 ************************************ 00:19:33.891 END TEST nvmf_bdevio 00:19:33.891 ************************************ 00:19:33.891 11:05:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:33.891 11:05:47 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.891 11:05:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:33.891 11:05:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.891 11:05:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:33.891 ************************************ 00:19:33.891 START TEST nvmf_auth_target 00:19:33.891 ************************************ 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.891 * Looking for test storage... 
00:19:33.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.891 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.892 11:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.796 11:05:50 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.796 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.796 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:35.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.796 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:19:35.797 00:19:35.797 --- 10.0.0.2 ping statistics --- 00:19:35.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.797 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:19:35.797 00:19:35.797 --- 10.0.0.1 ping statistics --- 00:19:35.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.797 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=257293 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 257293 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 257293 ']' 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
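The auth target test drives two SPDK processes at once: the nvmf target just started inside the namespace with -L nvmf_auth (RPC on the default /var/tmp/spdk.sock), and, a few lines below, a second spdk_tgt in the root namespace that plays the host side with its own RPC socket, so the two ends of the DH-HMAC-CHAP exchange can be configured independently. The two launch commands as they appear in the trace, paths shortened:

    # target side, inside the test namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth

    # host side, root namespace, separate RPC socket
    build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth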
00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.797 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=257427 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae78115e59d3a7cf6f1a90b1cc8c77596fe680b6eb5d4ea8 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.L73 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae78115e59d3a7cf6f1a90b1cc8c77596fe680b6eb5d4ea8 0 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae78115e59d3a7cf6f1a90b1cc8c77596fe680b6eb5d4ea8 0 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae78115e59d3a7cf6f1a90b1cc8c77596fe680b6eb5d4ea8 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.L73 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.L73 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.L73 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba0d838eed25094768c8f5c0f4ae34854bf691b9750f7f3ce8de8a9683c2ae9e 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.B4L 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba0d838eed25094768c8f5c0f4ae34854bf691b9750f7f3ce8de8a9683c2ae9e 3 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba0d838eed25094768c8f5c0f4ae34854bf691b9750f7f3ce8de8a9683c2ae9e 3 00:19:36.368 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba0d838eed25094768c8f5c0f4ae34854bf691b9750f7f3ce8de8a9683c2ae9e 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.B4L 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.B4L 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.B4L 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=424127222fc961f146fd46e27356a953 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zq6 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 424127222fc961f146fd46e27356a953 1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 424127222fc961f146fd46e27356a953 1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=424127222fc961f146fd46e27356a953 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zq6 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zq6 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.zq6 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a65360bffdcf2b7d2b9eb7434decdacd6f6cbdde9be0ffc4 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9NC 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a65360bffdcf2b7d2b9eb7434decdacd6f6cbdde9be0ffc4 2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a65360bffdcf2b7d2b9eb7434decdacd6f6cbdde9be0ffc4 2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a65360bffdcf2b7d2b9eb7434decdacd6f6cbdde9be0ffc4 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9NC 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9NC 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.9NC 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=56c1ff530dcfede399fb9742c7445bd1c26f9eace7fc5947 00:19:36.369 
11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7sx 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 56c1ff530dcfede399fb9742c7445bd1c26f9eace7fc5947 2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 56c1ff530dcfede399fb9742c7445bd1c26f9eace7fc5947 2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=56c1ff530dcfede399fb9742c7445bd1c26f9eace7fc5947 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7sx 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7sx 00:19:36.369 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.7sx 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b521ad032027947c2fee28b4d24ad52a 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t9n 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b521ad032027947c2fee28b4d24ad52a 1 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b521ad032027947c2fee28b4d24ad52a 1 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b521ad032027947c2fee28b4d24ad52a 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t9n 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t9n 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.t9n 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2d96075c2767678a85c62bfe73410c4fffa588583895459d323f41587c37a335 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PQs 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2d96075c2767678a85c62bfe73410c4fffa588583895459d323f41587c37a335 3 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2d96075c2767678a85c62bfe73410c4fffa588583895459d323f41587c37a335 3 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2d96075c2767678a85c62bfe73410c4fffa588583895459d323f41587c37a335 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PQs 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PQs 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PQs 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 257293 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 257293 ']' 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
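gen_dhchap_key, traced once per key above, produces each secret by reading raw bytes from /dev/urandom, hex-encoding them with xxd, and wrapping the result in the DHHC-1 secret representation with a digest id (0 = null/plain, 1 = sha256, 2 = sha384, 3 = sha512). A sketch of the visible steps for the first key; the inline python that performs the DHHC-1 wrapping is elided in the trace, so it appears here only as a comment:

    digest=0   # null
    len=48     # hex characters, i.e. len/2 random bytes
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)
    # an inline python step (not captured in the trace) writes the secret
    # to $file in the form DHHC-1:<digest>:<encoded key>
    chmod 0600 "$file"
    echo "$file"   # e.g. /tmp/spdk.key-null.L73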
00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.628 11:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 257427 /var/tmp/host.sock 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 257427 ']' 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:36.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.886 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L73 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.L73 00:19:37.143 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.L73 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.B4L ]] 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B4L 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B4L 00:19:37.400 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B4L 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.zq6 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.zq6 00:19:37.658 11:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.zq6 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.9NC ]] 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9NC 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9NC 00:19:37.915 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9NC 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7sx 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7sx 00:19:38.174 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7sx 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.t9n ]] 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t9n 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t9n 00:19:38.432 11:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.t9n 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PQs 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PQs 00:19:38.690 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PQs 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.947 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.204 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.767 00:19:39.767 11:05:53 nvmf_tcp.nvmf_auth_target -- 
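Every key file generated above is then registered under a well-known name on both sides: rpc_cmd talks to the target over the default /var/tmp/spdk.sock, while hostrpc repeats the call against the bdev_nvme host application on /var/tmp/host.sock, so that keyN and ckeyN later resolve on either end of the handshake. Condensed, the registration loop in the trace amounts to the following (keys/ckeys standing in for the arrays built above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
    # target side (default /var/tmp/spdk.sock)
    $rpc keyring_file_add_key "key$i" "${keys[i]}"
    # host side (the bdev_nvme application)
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done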
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.767 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.767 11:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.767 { 00:19:39.767 "cntlid": 1, 00:19:39.767 "qid": 0, 00:19:39.767 "state": "enabled", 00:19:39.767 "thread": "nvmf_tgt_poll_group_000", 00:19:39.767 "listen_address": { 00:19:39.767 "trtype": "TCP", 00:19:39.767 "adrfam": "IPv4", 00:19:39.767 "traddr": "10.0.0.2", 00:19:39.767 "trsvcid": "4420" 00:19:39.767 }, 00:19:39.767 "peer_address": { 00:19:39.767 "trtype": "TCP", 00:19:39.767 "adrfam": "IPv4", 00:19:39.767 "traddr": "10.0.0.1", 00:19:39.767 "trsvcid": "36698" 00:19:39.767 }, 00:19:39.767 "auth": { 00:19:39.767 "state": "completed", 00:19:39.767 "digest": "sha256", 00:19:39.767 "dhgroup": "null" 00:19:39.767 } 00:19:39.767 } 00:19:39.767 ]' 00:19:39.767 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.024 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.282 11:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.539 11:05:59 
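The block above is the verification half of the cycle: after attaching nvme0 with key0/ckey0, the test reads the qpair back from the target and asserts that authentication really completed with the digest and dhgroup it configured. The same check as a standalone snippet (same subsystem NQN and jq probes as in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]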
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.539 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.540 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.540 11:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.798 { 00:19:45.798 "cntlid": 3, 00:19:45.798 "qid": 0, 00:19:45.798 
"state": "enabled", 00:19:45.798 "thread": "nvmf_tgt_poll_group_000", 00:19:45.798 "listen_address": { 00:19:45.798 "trtype": "TCP", 00:19:45.798 "adrfam": "IPv4", 00:19:45.798 "traddr": "10.0.0.2", 00:19:45.798 "trsvcid": "4420" 00:19:45.798 }, 00:19:45.798 "peer_address": { 00:19:45.798 "trtype": "TCP", 00:19:45.798 "adrfam": "IPv4", 00:19:45.798 "traddr": "10.0.0.1", 00:19:45.798 "trsvcid": "36732" 00:19:45.798 }, 00:19:45.798 "auth": { 00:19:45.798 "state": "completed", 00:19:45.798 "digest": "sha256", 00:19:45.798 "dhgroup": "null" 00:19:45.798 } 00:19:45.798 } 00:19:45.798 ]' 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.798 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.056 11:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.987 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.244 11:06:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.244 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.502 00:19:47.502 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.502 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.502 11:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.760 { 00:19:47.760 "cntlid": 5, 00:19:47.760 "qid": 0, 00:19:47.760 "state": "enabled", 00:19:47.760 "thread": "nvmf_tgt_poll_group_000", 00:19:47.760 "listen_address": { 00:19:47.760 "trtype": "TCP", 00:19:47.760 "adrfam": "IPv4", 00:19:47.760 "traddr": "10.0.0.2", 00:19:47.760 "trsvcid": "4420" 00:19:47.760 }, 00:19:47.760 "peer_address": { 00:19:47.760 "trtype": "TCP", 00:19:47.760 "adrfam": "IPv4", 00:19:47.760 "traddr": "10.0.0.1", 00:19:47.760 "trsvcid": "45450" 00:19:47.760 }, 00:19:47.760 "auth": { 00:19:47.760 "state": "completed", 00:19:47.760 "digest": "sha256", 00:19:47.760 "dhgroup": "null" 00:19:47.760 } 00:19:47.760 } 00:19:47.760 ]' 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.760 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.018 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:48.018 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:48.018 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.018 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.018 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.276 11:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.209 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.466 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.722 00:19:49.722 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.722 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.722 11:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.979 { 00:19:49.979 "cntlid": 7, 00:19:49.979 "qid": 0, 00:19:49.979 "state": "enabled", 00:19:49.979 "thread": "nvmf_tgt_poll_group_000", 00:19:49.979 "listen_address": { 00:19:49.979 "trtype": "TCP", 00:19:49.979 "adrfam": "IPv4", 00:19:49.979 "traddr": "10.0.0.2", 00:19:49.979 "trsvcid": "4420" 00:19:49.979 }, 00:19:49.979 "peer_address": { 00:19:49.979 "trtype": "TCP", 00:19:49.979 "adrfam": "IPv4", 00:19:49.979 "traddr": "10.0.0.1", 00:19:49.979 "trsvcid": "45476" 00:19:49.979 }, 00:19:49.979 "auth": { 00:19:49.979 "state": "completed", 00:19:49.979 "digest": "sha256", 00:19:49.979 "dhgroup": "null" 00:19:49.979 } 00:19:49.979 } 00:19:49.979 ]' 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.979 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.237 11:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
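Note the asymmetry in this pass: keys[3] was created without a companion key (ckeys[3]= is empty above), so both the add_host and attach_controller calls carry only --dhchap-key key3, and the ${ckeys[...]:+...} expansion silently drops the controller-key flag. That is the unidirectional variant of DH-HMAC-CHAP: the host proves itself to the target, but does not challenge the controller back. Side by side, as a sketch with the same RPCs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# bidirectional: host and controller each prove possession of a key
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# unidirectional (the key3 case above): no controller challenge requested
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3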
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.170 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.427 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.428 11:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.685 00:19:51.685 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.685 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.685 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.942 { 00:19:51.942 "cntlid": 9, 00:19:51.942 "qid": 0, 00:19:51.942 "state": "enabled", 00:19:51.942 "thread": "nvmf_tgt_poll_group_000", 00:19:51.942 "listen_address": { 00:19:51.942 "trtype": "TCP", 00:19:51.942 "adrfam": "IPv4", 00:19:51.942 "traddr": "10.0.0.2", 00:19:51.942 "trsvcid": "4420" 00:19:51.942 }, 00:19:51.942 "peer_address": { 00:19:51.942 "trtype": "TCP", 00:19:51.942 "adrfam": "IPv4", 00:19:51.942 "traddr": "10.0.0.1", 00:19:51.942 "trsvcid": "45506" 00:19:51.942 }, 00:19:51.942 "auth": { 00:19:51.942 "state": "completed", 00:19:51.942 "digest": "sha256", 00:19:51.942 "dhgroup": "ffdhe2048" 00:19:51.942 } 00:19:51.942 } 00:19:51.942 ]' 00:19:51.942 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.200 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.458 11:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.394 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.652 11:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.909 00:19:53.909 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.909 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.909 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.167 { 00:19:54.167 "cntlid": 11, 00:19:54.167 "qid": 0, 00:19:54.167 "state": "enabled", 00:19:54.167 "thread": "nvmf_tgt_poll_group_000", 00:19:54.167 "listen_address": { 00:19:54.167 "trtype": "TCP", 00:19:54.167 "adrfam": "IPv4", 00:19:54.167 "traddr": "10.0.0.2", 00:19:54.167 "trsvcid": "4420" 00:19:54.167 }, 00:19:54.167 "peer_address": { 00:19:54.167 "trtype": "TCP", 00:19:54.167 "adrfam": "IPv4", 00:19:54.167 "traddr": "10.0.0.1", 00:19:54.167 "trsvcid": "45542" 00:19:54.167 }, 00:19:54.167 "auth": { 00:19:54.167 "state": "completed", 00:19:54.167 "digest": "sha256", 00:19:54.167 "dhgroup": "ffdhe2048" 00:19:54.167 } 00:19:54.167 } 00:19:54.167 ]' 00:19:54.167 
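With ffdhe2048 the qpair's auth.dhgroup now reports the FFDHE group instead of null, meaning each handshake also performs a real Diffie-Hellman exchange. The only host-side change between passes is the bdev_nvme_set_options call issued before the controllers are re-attached, which the test repeats once per group; in outline:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for dhgroup in null ffdhe2048 ffdhe3072; do   # the groups visible in this excerpt
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # ...attach, assert .auth.dhgroup == "$dhgroup" on the qpair, detach...
done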
11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.167 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.424 11:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.358 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.615 11:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.873 00:19:55.873 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.873 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.873 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.139 { 00:19:56.139 "cntlid": 13, 00:19:56.139 "qid": 0, 00:19:56.139 "state": "enabled", 00:19:56.139 "thread": "nvmf_tgt_poll_group_000", 00:19:56.139 "listen_address": { 00:19:56.139 "trtype": "TCP", 00:19:56.139 "adrfam": "IPv4", 00:19:56.139 "traddr": "10.0.0.2", 00:19:56.139 "trsvcid": "4420" 00:19:56.139 }, 00:19:56.139 "peer_address": { 00:19:56.139 "trtype": "TCP", 00:19:56.139 "adrfam": "IPv4", 00:19:56.139 "traddr": "10.0.0.1", 00:19:56.139 "trsvcid": "45574" 00:19:56.139 }, 00:19:56.139 "auth": { 00:19:56.139 "state": "completed", 00:19:56.139 "digest": "sha256", 00:19:56.139 "dhgroup": "ffdhe2048" 00:19:56.139 } 00:19:56.139 } 00:19:56.139 ]' 00:19:56.139 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.140 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.140 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.423 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.423 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.423 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.423 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.423 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.696 11:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.324 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.582 11:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.146 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.146 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.146 { 00:19:58.146 "cntlid": 15, 00:19:58.146 "qid": 0, 00:19:58.147 "state": "enabled", 00:19:58.147 "thread": "nvmf_tgt_poll_group_000", 00:19:58.147 "listen_address": { 00:19:58.147 "trtype": "TCP", 00:19:58.147 "adrfam": "IPv4", 00:19:58.147 "traddr": "10.0.0.2", 00:19:58.147 "trsvcid": "4420" 00:19:58.147 }, 00:19:58.147 "peer_address": { 00:19:58.147 "trtype": "TCP", 00:19:58.147 "adrfam": "IPv4", 00:19:58.147 "traddr": "10.0.0.1", 00:19:58.147 "trsvcid": "48992" 00:19:58.147 }, 00:19:58.147 "auth": { 00:19:58.147 "state": "completed", 00:19:58.147 "digest": "sha256", 00:19:58.147 "dhgroup": "ffdhe2048" 00:19:58.147 } 00:19:58.147 } 00:19:58.147 ]' 00:19:58.147 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.403 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.404 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.663 11:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.596 11:06:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.596 11:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.596 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.854 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.854 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.854 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.111 00:20:00.111 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.111 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.111 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.369 { 00:20:00.369 "cntlid": 17, 00:20:00.369 "qid": 0, 00:20:00.369 "state": "enabled", 00:20:00.369 "thread": "nvmf_tgt_poll_group_000", 00:20:00.369 "listen_address": { 00:20:00.369 "trtype": "TCP", 00:20:00.369 "adrfam": "IPv4", 
00:20:00.369 "traddr": "10.0.0.2", 00:20:00.369 "trsvcid": "4420" 00:20:00.369 }, 00:20:00.369 "peer_address": { 00:20:00.369 "trtype": "TCP", 00:20:00.369 "adrfam": "IPv4", 00:20:00.369 "traddr": "10.0.0.1", 00:20:00.369 "trsvcid": "49012" 00:20:00.369 }, 00:20:00.369 "auth": { 00:20:00.369 "state": "completed", 00:20:00.369 "digest": "sha256", 00:20:00.369 "dhgroup": "ffdhe3072" 00:20:00.369 } 00:20:00.369 } 00:20:00.369 ]' 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.369 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.627 11:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.559 11:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.817 11:06:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.817 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.075 00:20:02.075 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.075 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.075 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.332 { 00:20:02.332 "cntlid": 19, 00:20:02.332 "qid": 0, 00:20:02.332 "state": "enabled", 00:20:02.332 "thread": "nvmf_tgt_poll_group_000", 00:20:02.332 "listen_address": { 00:20:02.332 "trtype": "TCP", 00:20:02.332 "adrfam": "IPv4", 00:20:02.332 "traddr": "10.0.0.2", 00:20:02.332 "trsvcid": "4420" 00:20:02.332 }, 00:20:02.332 "peer_address": { 00:20:02.332 "trtype": "TCP", 00:20:02.332 "adrfam": "IPv4", 00:20:02.332 "traddr": "10.0.0.1", 00:20:02.332 "trsvcid": "49038" 00:20:02.332 }, 00:20:02.332 "auth": { 00:20:02.332 "state": "completed", 00:20:02.332 "digest": "sha256", 00:20:02.332 "dhgroup": "ffdhe3072" 00:20:02.332 } 00:20:02.332 } 00:20:02.332 ]' 00:20:02.332 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.590 11:06:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.590 11:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.848 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.784 11:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.784 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.349 00:20:04.349 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.349 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.349 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.607 { 00:20:04.607 "cntlid": 21, 00:20:04.607 "qid": 0, 00:20:04.607 "state": "enabled", 00:20:04.607 "thread": "nvmf_tgt_poll_group_000", 00:20:04.607 "listen_address": { 00:20:04.607 "trtype": "TCP", 00:20:04.607 "adrfam": "IPv4", 00:20:04.607 "traddr": "10.0.0.2", 00:20:04.607 "trsvcid": "4420" 00:20:04.607 }, 00:20:04.607 "peer_address": { 00:20:04.607 "trtype": "TCP", 00:20:04.607 "adrfam": "IPv4", 00:20:04.607 "traddr": "10.0.0.1", 00:20:04.607 "trsvcid": "49076" 00:20:04.607 }, 00:20:04.607 "auth": { 00:20:04.607 "state": "completed", 00:20:04.607 "digest": "sha256", 00:20:04.607 "dhgroup": "ffdhe3072" 00:20:04.607 } 00:20:04.607 } 00:20:04.607 ]' 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.607 11:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.865 11:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
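[Annotation] For reference while reading the repeating blocks above: every connect_authenticate pass in this log is the same RPC sequence, shown here as a minimal standalone sketch for the sha256/ffdhe3072/key0 pass. The NQNs, address, port, and flags are copied verbatim from the trace; $SPDK_DIR, the target-side rpc.py on its default socket, and the pre-registered key names key0/ckey0 (set up earlier in auth.sh) are assumptions of the sketch, not output of this run.

    #!/usr/bin/env bash
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    RPC_HOST="$SPDK_DIR/scripts/rpc.py -s /var/tmp/host.sock"  # host-side bdev app (socket as in trace)
    RPC_TGT="$SPDK_DIR/scripts/rpc.py"                         # target app, default socket (assumed)

    # Pin the host to one digest/dhgroup combination for this pass.
    $RPC_HOST bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Register the host on the subsystem with a bidirectional key pair.
    $RPC_TGT nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller; DH-HMAC-CHAP runs during this connect.
    $RPC_HOST bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # On success the host sees exactly one controller, named nvme0.
    [[ $($RPC_HOST bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]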
00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.800 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.059 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.317 00:20:06.317 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.317 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.317 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.575 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.575 { 00:20:06.575 "cntlid": 23, 00:20:06.575 "qid": 0, 00:20:06.575 "state": "enabled", 00:20:06.575 "thread": "nvmf_tgt_poll_group_000", 00:20:06.575 "listen_address": { 00:20:06.575 "trtype": "TCP", 00:20:06.575 "adrfam": "IPv4", 00:20:06.575 "traddr": "10.0.0.2", 00:20:06.575 "trsvcid": "4420" 00:20:06.575 }, 00:20:06.575 "peer_address": { 00:20:06.575 "trtype": "TCP", 00:20:06.576 "adrfam": "IPv4", 00:20:06.576 "traddr": "10.0.0.1", 00:20:06.576 "trsvcid": "49102" 00:20:06.576 }, 00:20:06.576 "auth": { 00:20:06.576 "state": "completed", 00:20:06.576 "digest": "sha256", 00:20:06.576 "dhgroup": "ffdhe3072" 00:20:06.576 } 00:20:06.576 } 00:20:06.576 ]' 00:20:06.576 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.576 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.576 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.576 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.576 11:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.835 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.835 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.835 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.835 11:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.772 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.030 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.599 00:20:08.599 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.599 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.599 11:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.857 { 00:20:08.857 "cntlid": 25, 00:20:08.857 "qid": 0, 00:20:08.857 "state": "enabled", 00:20:08.857 "thread": "nvmf_tgt_poll_group_000", 00:20:08.857 "listen_address": { 00:20:08.857 "trtype": "TCP", 00:20:08.857 "adrfam": "IPv4", 00:20:08.857 "traddr": "10.0.0.2", 00:20:08.857 "trsvcid": "4420" 00:20:08.857 }, 00:20:08.857 "peer_address": { 00:20:08.857 "trtype": "TCP", 00:20:08.857 "adrfam": "IPv4", 00:20:08.857 "traddr": "10.0.0.1", 00:20:08.857 "trsvcid": "35470" 00:20:08.857 }, 00:20:08.857 "auth": { 00:20:08.857 "state": "completed", 00:20:08.857 "digest": "sha256", 00:20:08.857 "dhgroup": "ffdhe4096" 00:20:08.857 } 00:20:08.857 } 00:20:08.857 ]' 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.857 11:06:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.857 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.115 11:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.050 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.309 11:06:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.567 00:20:10.567 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.567 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.567 11:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.825 { 00:20:10.825 "cntlid": 27, 00:20:10.825 "qid": 0, 00:20:10.825 "state": "enabled", 00:20:10.825 "thread": "nvmf_tgt_poll_group_000", 00:20:10.825 "listen_address": { 00:20:10.825 "trtype": "TCP", 00:20:10.825 "adrfam": "IPv4", 00:20:10.825 "traddr": "10.0.0.2", 00:20:10.825 "trsvcid": "4420" 00:20:10.825 }, 00:20:10.825 "peer_address": { 00:20:10.825 "trtype": "TCP", 00:20:10.825 "adrfam": "IPv4", 00:20:10.825 "traddr": "10.0.0.1", 00:20:10.825 "trsvcid": "35492" 00:20:10.825 }, 00:20:10.825 "auth": { 00:20:10.825 "state": "completed", 00:20:10.825 "digest": "sha256", 00:20:10.825 "dhgroup": "ffdhe4096" 00:20:10.825 } 00:20:10.825 } 00:20:10.825 ]' 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.825 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.083 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.083 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.083 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.083 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.083 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.341 11:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:12.279 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.279 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.279 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.279 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.279 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.280 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.280 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.280 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.538 11:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.797 00:20:12.797 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.797 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.797 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.055 { 00:20:13.055 "cntlid": 29, 00:20:13.055 "qid": 0, 00:20:13.055 "state": "enabled", 00:20:13.055 "thread": "nvmf_tgt_poll_group_000", 00:20:13.055 "listen_address": { 00:20:13.055 "trtype": "TCP", 00:20:13.055 "adrfam": "IPv4", 00:20:13.055 "traddr": "10.0.0.2", 00:20:13.055 "trsvcid": "4420" 00:20:13.055 }, 00:20:13.055 "peer_address": { 00:20:13.055 "trtype": "TCP", 00:20:13.055 "adrfam": "IPv4", 00:20:13.055 "traddr": "10.0.0.1", 00:20:13.055 "trsvcid": "35514" 00:20:13.055 }, 00:20:13.055 "auth": { 00:20:13.055 "state": "completed", 00:20:13.055 "digest": "sha256", 00:20:13.055 "dhgroup": "ffdhe4096" 00:20:13.055 } 00:20:13.055 } 00:20:13.055 ]' 00:20:13.055 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.313 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.572 11:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
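[Annotation] The qpair JSON dumps above are what the target-side verification consumes: the test asserts that the negotiated digest, DH group, and final auth state match what it configured. A condensed sketch of those checks, using the same jq filters as the trace and the expected values from the ffdhe4096 pass just shown ($RPC_TGT and $SUBNQN as in the earlier sketch):

    # Fetch the subsystem's qpairs and check the .auth block of the first one.
    qpairs=$($RPC_TGT nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]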
00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.508 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.766 11:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.024 00:20:15.024 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.024 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.024 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.280 { 00:20:15.280 "cntlid": 31, 00:20:15.280 "qid": 0, 00:20:15.280 "state": "enabled", 00:20:15.280 "thread": "nvmf_tgt_poll_group_000", 00:20:15.280 "listen_address": { 00:20:15.280 "trtype": "TCP", 00:20:15.280 "adrfam": "IPv4", 00:20:15.280 "traddr": "10.0.0.2", 00:20:15.280 "trsvcid": 
"4420" 00:20:15.280 }, 00:20:15.280 "peer_address": { 00:20:15.280 "trtype": "TCP", 00:20:15.280 "adrfam": "IPv4", 00:20:15.280 "traddr": "10.0.0.1", 00:20:15.280 "trsvcid": "35546" 00:20:15.280 }, 00:20:15.280 "auth": { 00:20:15.280 "state": "completed", 00:20:15.280 "digest": "sha256", 00:20:15.280 "dhgroup": "ffdhe4096" 00:20:15.280 } 00:20:15.280 } 00:20:15.280 ]' 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.280 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.537 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.537 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.537 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.795 11:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.731 11:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.731 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.300 00:20:17.300 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.300 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.300 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.558 { 00:20:17.558 "cntlid": 33, 00:20:17.558 "qid": 0, 00:20:17.558 "state": "enabled", 00:20:17.558 "thread": "nvmf_tgt_poll_group_000", 00:20:17.558 "listen_address": { 00:20:17.558 "trtype": "TCP", 00:20:17.558 "adrfam": "IPv4", 00:20:17.558 "traddr": "10.0.0.2", 00:20:17.558 "trsvcid": "4420" 00:20:17.558 }, 00:20:17.558 "peer_address": { 00:20:17.558 "trtype": "TCP", 00:20:17.558 "adrfam": "IPv4", 00:20:17.558 "traddr": "10.0.0.1", 00:20:17.558 "trsvcid": "35576" 00:20:17.558 }, 00:20:17.558 "auth": { 00:20:17.558 "state": "completed", 00:20:17.558 "digest": "sha256", 00:20:17.558 "dhgroup": "ffdhe6144" 00:20:17.558 } 00:20:17.558 } 00:20:17.558 ]' 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.558 11:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.816 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:17.816 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.816 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.074 11:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.014 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.272 11:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.837 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.837 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.094 11:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.094 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.094 { 00:20:20.095 "cntlid": 35, 00:20:20.095 "qid": 0, 00:20:20.095 "state": "enabled", 00:20:20.095 "thread": "nvmf_tgt_poll_group_000", 00:20:20.095 "listen_address": { 00:20:20.095 "trtype": "TCP", 00:20:20.095 "adrfam": "IPv4", 00:20:20.095 "traddr": "10.0.0.2", 00:20:20.095 "trsvcid": "4420" 00:20:20.095 }, 00:20:20.095 "peer_address": { 00:20:20.095 "trtype": "TCP", 00:20:20.095 "adrfam": "IPv4", 00:20:20.095 "traddr": "10.0.0.1", 00:20:20.095 "trsvcid": "52930" 00:20:20.095 }, 00:20:20.095 "auth": { 00:20:20.095 "state": "completed", 00:20:20.095 "digest": "sha256", 00:20:20.095 "dhgroup": "ffdhe6144" 00:20:20.095 } 00:20:20.095 } 00:20:20.095 ]' 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.095 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.352 11:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
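[Annotation] After the SPDK-host leg is detached, each pass repeats authentication with the kernel initiator via nvme-cli, passing the secrets in DHHC-1 form; the DHHC-1:NN: prefix identifies how the base64 payload was transformed when the secret was generated (00 unhashed, 01/02/03 for SHA-256/-384/-512, per the NVMe DH-HMAC-CHAP secret representation). A sketch of that leg for the key1 pass above, with the long secrets abbreviated into hypothetical $KEY1/$CKEY1 variables standing for the DHHC-1 strings printed in the trace:

    KEY1='DHHC-1:01:...'    # host secret for key1, abbreviated from the trace above
    CKEY1='DHHC-1:02:...'   # controller (bidirectional) secret for ckey1, abbreviated

    # -i 1 limits the connect to a single I/O queue, as in the trace.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
    nvme disconnect -n "$SUBNQN"   # expect: "disconnected 1 controller(s)"

    # Tear down so the next keyid/dhgroup pass starts clean.
    $RPC_TGT nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"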
00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.288 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.546 11:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.113 00:20:22.113 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.113 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.113 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
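[Annotation] Zooming out, this whole stretch of log is driven by the nested loop traced at auth.sh@92-@96; a reconstruction of its shape (the dhgroups and keys arrays are defined earlier in auth.sh and are not visible in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do      # 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

Inside connect_authenticate, the controller-key argument is built as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) (auth.sh@37), which is why the key3 passes above add the host with --dhchap-key key3 only: key3 has no companion controller key, the expansion is empty, and that pass authenticates unidirectionally.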
00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.371 { 00:20:22.371 "cntlid": 37, 00:20:22.371 "qid": 0, 00:20:22.371 "state": "enabled", 00:20:22.371 "thread": "nvmf_tgt_poll_group_000", 00:20:22.371 "listen_address": { 00:20:22.371 "trtype": "TCP", 00:20:22.371 "adrfam": "IPv4", 00:20:22.371 "traddr": "10.0.0.2", 00:20:22.371 "trsvcid": "4420" 00:20:22.371 }, 00:20:22.371 "peer_address": { 00:20:22.371 "trtype": "TCP", 00:20:22.371 "adrfam": "IPv4", 00:20:22.371 "traddr": "10.0.0.1", 00:20:22.371 "trsvcid": "52958" 00:20:22.371 }, 00:20:22.371 "auth": { 00:20:22.371 "state": "completed", 00:20:22.371 "digest": "sha256", 00:20:22.371 "dhgroup": "ffdhe6144" 00:20:22.371 } 00:20:22.371 } 00:20:22.371 ]' 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.371 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.629 11:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.565 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.566 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.566 11:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.823 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.390 00:20:24.390 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.390 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.390 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.647 { 00:20:24.647 "cntlid": 39, 00:20:24.647 "qid": 0, 00:20:24.647 "state": "enabled", 00:20:24.647 "thread": "nvmf_tgt_poll_group_000", 00:20:24.647 "listen_address": { 00:20:24.647 "trtype": "TCP", 00:20:24.647 "adrfam": "IPv4", 00:20:24.647 "traddr": "10.0.0.2", 00:20:24.647 "trsvcid": "4420" 00:20:24.647 }, 00:20:24.647 "peer_address": { 00:20:24.647 "trtype": "TCP", 00:20:24.647 "adrfam": "IPv4", 00:20:24.647 "traddr": "10.0.0.1", 00:20:24.647 "trsvcid": "53000" 00:20:24.647 }, 00:20:24.647 "auth": { 00:20:24.647 "state": "completed", 00:20:24.647 "digest": "sha256", 00:20:24.647 "dhgroup": "ffdhe6144" 00:20:24.647 } 00:20:24.647 } 00:20:24.647 ]' 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.647 11:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.647 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.648 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.648 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.906 11:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.842 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.100 11:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.040 00:20:27.040 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.040 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.040 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.298 { 00:20:27.298 "cntlid": 41, 00:20:27.298 "qid": 0, 00:20:27.298 "state": "enabled", 00:20:27.298 "thread": "nvmf_tgt_poll_group_000", 00:20:27.298 "listen_address": { 00:20:27.298 "trtype": "TCP", 00:20:27.298 "adrfam": "IPv4", 00:20:27.298 "traddr": "10.0.0.2", 00:20:27.298 "trsvcid": "4420" 00:20:27.298 }, 00:20:27.298 "peer_address": { 00:20:27.298 "trtype": "TCP", 00:20:27.298 "adrfam": "IPv4", 00:20:27.298 "traddr": "10.0.0.1", 00:20:27.298 "trsvcid": "53022" 00:20:27.298 }, 00:20:27.298 "auth": { 00:20:27.298 "state": "completed", 00:20:27.298 "digest": "sha256", 00:20:27.298 "dhgroup": "ffdhe8192" 00:20:27.298 } 00:20:27.298 } 00:20:27.298 ]' 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.298 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.558 11:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.492 11:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.751 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.688 00:20:29.688 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.688 11:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.688 11:06:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.945 { 00:20:29.945 "cntlid": 43, 00:20:29.945 "qid": 0, 00:20:29.945 "state": "enabled", 00:20:29.945 "thread": "nvmf_tgt_poll_group_000", 00:20:29.945 "listen_address": { 00:20:29.945 "trtype": "TCP", 00:20:29.945 "adrfam": "IPv4", 00:20:29.945 "traddr": "10.0.0.2", 00:20:29.945 "trsvcid": "4420" 00:20:29.945 }, 00:20:29.945 "peer_address": { 00:20:29.945 "trtype": "TCP", 00:20:29.945 "adrfam": "IPv4", 00:20:29.945 "traddr": "10.0.0.1", 00:20:29.945 "trsvcid": "39350" 00:20:29.945 }, 00:20:29.945 "auth": { 00:20:29.945 "state": "completed", 00:20:29.945 "digest": "sha256", 00:20:29.945 "dhgroup": "ffdhe8192" 00:20:29.945 } 00:20:29.945 } 00:20:29.945 ]' 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.945 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.203 11:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.136 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.394 11:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.331 00:20:32.331 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.331 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.331 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.589 { 00:20:32.589 "cntlid": 45, 00:20:32.589 "qid": 0, 00:20:32.589 "state": "enabled", 00:20:32.589 "thread": "nvmf_tgt_poll_group_000", 00:20:32.589 "listen_address": { 00:20:32.589 "trtype": "TCP", 00:20:32.589 "adrfam": "IPv4", 00:20:32.589 "traddr": "10.0.0.2", 00:20:32.589 
"trsvcid": "4420" 00:20:32.589 }, 00:20:32.589 "peer_address": { 00:20:32.589 "trtype": "TCP", 00:20:32.589 "adrfam": "IPv4", 00:20:32.589 "traddr": "10.0.0.1", 00:20:32.589 "trsvcid": "39376" 00:20:32.589 }, 00:20:32.589 "auth": { 00:20:32.589 "state": "completed", 00:20:32.589 "digest": "sha256", 00:20:32.589 "dhgroup": "ffdhe8192" 00:20:32.589 } 00:20:32.589 } 00:20:32.589 ]' 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.589 11:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.847 11:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.786 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.044 11:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.983 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.983 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.240 { 00:20:35.240 "cntlid": 47, 00:20:35.240 "qid": 0, 00:20:35.240 "state": "enabled", 00:20:35.240 "thread": "nvmf_tgt_poll_group_000", 00:20:35.240 "listen_address": { 00:20:35.240 "trtype": "TCP", 00:20:35.240 "adrfam": "IPv4", 00:20:35.240 "traddr": "10.0.0.2", 00:20:35.240 "trsvcid": "4420" 00:20:35.240 }, 00:20:35.240 "peer_address": { 00:20:35.240 "trtype": "TCP", 00:20:35.240 "adrfam": "IPv4", 00:20:35.240 "traddr": "10.0.0.1", 00:20:35.240 "trsvcid": "39398" 00:20:35.240 }, 00:20:35.240 "auth": { 00:20:35.240 "state": "completed", 00:20:35.240 "digest": "sha256", 00:20:35.240 "dhgroup": "ffdhe8192" 00:20:35.240 } 00:20:35.240 } 00:20:35.240 ]' 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:20:35.240 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.498 11:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.432 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.690 11:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.947 00:20:36.947 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.947 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.947 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.205 { 00:20:37.205 "cntlid": 49, 00:20:37.205 "qid": 0, 00:20:37.205 "state": "enabled", 00:20:37.205 "thread": "nvmf_tgt_poll_group_000", 00:20:37.205 "listen_address": { 00:20:37.205 "trtype": "TCP", 00:20:37.205 "adrfam": "IPv4", 00:20:37.205 "traddr": "10.0.0.2", 00:20:37.205 "trsvcid": "4420" 00:20:37.205 }, 00:20:37.205 "peer_address": { 00:20:37.205 "trtype": "TCP", 00:20:37.205 "adrfam": "IPv4", 00:20:37.205 "traddr": "10.0.0.1", 00:20:37.205 "trsvcid": "39426" 00:20:37.205 }, 00:20:37.205 "auth": { 00:20:37.205 "state": "completed", 00:20:37.205 "digest": "sha384", 00:20:37.205 "dhgroup": "null" 00:20:37.205 } 00:20:37.205 } 00:20:37.205 ]' 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.205 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.463 11:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.402 11:06:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.402 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.660 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.661 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.661 11:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.661 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.661 11:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.918 00:20:38.918 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.918 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.918 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.176 { 00:20:39.176 "cntlid": 51, 00:20:39.176 "qid": 0, 00:20:39.176 "state": "enabled", 00:20:39.176 "thread": "nvmf_tgt_poll_group_000", 00:20:39.176 "listen_address": { 00:20:39.176 "trtype": "TCP", 00:20:39.176 "adrfam": "IPv4", 00:20:39.176 "traddr": "10.0.0.2", 00:20:39.176 "trsvcid": "4420" 00:20:39.176 }, 00:20:39.176 "peer_address": { 00:20:39.176 "trtype": "TCP", 00:20:39.176 "adrfam": "IPv4", 00:20:39.176 "traddr": "10.0.0.1", 00:20:39.176 "trsvcid": "51806" 00:20:39.176 }, 00:20:39.176 "auth": { 00:20:39.176 "state": "completed", 00:20:39.176 "digest": "sha384", 00:20:39.176 "dhgroup": "null" 00:20:39.176 } 00:20:39.176 } 00:20:39.176 ]' 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.176 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.433 11:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.367 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:40.625 
11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.625 11:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.885 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.143 { 00:20:41.143 "cntlid": 53, 00:20:41.143 "qid": 0, 00:20:41.143 "state": "enabled", 00:20:41.143 "thread": "nvmf_tgt_poll_group_000", 00:20:41.143 "listen_address": { 00:20:41.143 "trtype": "TCP", 00:20:41.143 "adrfam": "IPv4", 00:20:41.143 "traddr": "10.0.0.2", 00:20:41.143 "trsvcid": "4420" 00:20:41.143 }, 00:20:41.143 "peer_address": { 00:20:41.143 "trtype": "TCP", 00:20:41.143 "adrfam": "IPv4", 00:20:41.143 "traddr": "10.0.0.1", 00:20:41.143 "trsvcid": "51842" 00:20:41.143 }, 00:20:41.143 "auth": { 00:20:41.143 "state": "completed", 00:20:41.143 "digest": "sha384", 00:20:41.143 "dhgroup": "null" 00:20:41.143 } 00:20:41.143 } 00:20:41.143 ]' 00:20:41.143 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.400 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.657 11:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.596 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.597 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.597 11:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.855 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.115 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.374 { 00:20:43.374 "cntlid": 55, 00:20:43.374 "qid": 0, 00:20:43.374 "state": "enabled", 00:20:43.374 "thread": "nvmf_tgt_poll_group_000", 00:20:43.374 "listen_address": { 00:20:43.374 "trtype": "TCP", 00:20:43.374 "adrfam": "IPv4", 00:20:43.374 "traddr": "10.0.0.2", 00:20:43.374 "trsvcid": "4420" 00:20:43.374 }, 00:20:43.374 "peer_address": { 00:20:43.374 "trtype": "TCP", 00:20:43.374 "adrfam": "IPv4", 00:20:43.374 "traddr": "10.0.0.1", 00:20:43.374 "trsvcid": "51860" 00:20:43.374 }, 00:20:43.374 "auth": { 00:20:43.374 "state": "completed", 00:20:43.374 "digest": "sha384", 00:20:43.374 "dhgroup": "null" 00:20:43.374 } 00:20:43.374 } 00:20:43.374 ]' 00:20:43.374 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.632 11:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.889 11:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:44.824 11:06:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.824 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.082 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.340 00:20:45.340 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.340 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.340 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.598 { 00:20:45.598 "cntlid": 57, 00:20:45.598 "qid": 0, 00:20:45.598 "state": "enabled", 00:20:45.598 "thread": "nvmf_tgt_poll_group_000", 00:20:45.598 "listen_address": { 00:20:45.598 "trtype": "TCP", 00:20:45.598 "adrfam": "IPv4", 00:20:45.598 "traddr": "10.0.0.2", 00:20:45.598 "trsvcid": "4420" 00:20:45.598 }, 00:20:45.598 "peer_address": { 00:20:45.598 "trtype": "TCP", 00:20:45.598 "adrfam": "IPv4", 00:20:45.598 "traddr": "10.0.0.1", 00:20:45.598 "trsvcid": "51870" 00:20:45.598 }, 00:20:45.598 "auth": { 00:20:45.598 "state": "completed", 00:20:45.598 "digest": "sha384", 00:20:45.598 "dhgroup": "ffdhe2048" 00:20:45.598 } 00:20:45.598 } 00:20:45.598 ]' 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.598 11:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.598 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.598 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.598 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.856 11:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.792 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.050 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:47.050 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.050 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.051 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.308 00:20:47.308 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.308 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.308 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.566 { 00:20:47.566 "cntlid": 59, 00:20:47.566 "qid": 0, 00:20:47.566 "state": "enabled", 00:20:47.566 "thread": "nvmf_tgt_poll_group_000", 00:20:47.566 "listen_address": { 00:20:47.566 "trtype": "TCP", 00:20:47.566 "adrfam": "IPv4", 00:20:47.566 "traddr": "10.0.0.2", 00:20:47.566 "trsvcid": "4420" 00:20:47.566 }, 00:20:47.566 "peer_address": { 00:20:47.566 "trtype": "TCP", 00:20:47.566 "adrfam": "IPv4", 00:20:47.566 
"traddr": "10.0.0.1", 00:20:47.566 "trsvcid": "35964" 00:20:47.566 }, 00:20:47.566 "auth": { 00:20:47.566 "state": "completed", 00:20:47.566 "digest": "sha384", 00:20:47.566 "dhgroup": "ffdhe2048" 00:20:47.566 } 00:20:47.566 } 00:20:47.566 ]' 00:20:47.566 11:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.825 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.082 11:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.018 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.276 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.533 00:20:49.533 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.533 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.533 11:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.790 { 00:20:49.790 "cntlid": 61, 00:20:49.790 "qid": 0, 00:20:49.790 "state": "enabled", 00:20:49.790 "thread": "nvmf_tgt_poll_group_000", 00:20:49.790 "listen_address": { 00:20:49.790 "trtype": "TCP", 00:20:49.790 "adrfam": "IPv4", 00:20:49.790 "traddr": "10.0.0.2", 00:20:49.790 "trsvcid": "4420" 00:20:49.790 }, 00:20:49.790 "peer_address": { 00:20:49.790 "trtype": "TCP", 00:20:49.790 "adrfam": "IPv4", 00:20:49.790 "traddr": "10.0.0.1", 00:20:49.790 "trsvcid": "35996" 00:20:49.790 }, 00:20:49.790 "auth": { 00:20:49.790 "state": "completed", 00:20:49.790 "digest": "sha384", 00:20:49.790 "dhgroup": "ffdhe2048" 00:20:49.790 } 00:20:49.790 } 00:20:49.790 ]' 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.790 11:07:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.049 11:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.983 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.241 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.499 00:20:51.499 11:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.499 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.499 11:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.758 { 00:20:51.758 "cntlid": 63, 00:20:51.758 "qid": 0, 00:20:51.758 "state": "enabled", 00:20:51.758 "thread": "nvmf_tgt_poll_group_000", 00:20:51.758 "listen_address": { 00:20:51.758 "trtype": "TCP", 00:20:51.758 "adrfam": "IPv4", 00:20:51.758 "traddr": "10.0.0.2", 00:20:51.758 "trsvcid": "4420" 00:20:51.758 }, 00:20:51.758 "peer_address": { 00:20:51.758 "trtype": "TCP", 00:20:51.758 "adrfam": "IPv4", 00:20:51.758 "traddr": "10.0.0.1", 00:20:51.758 "trsvcid": "36014" 00:20:51.758 }, 00:20:51.758 "auth": { 00:20:51.758 "state": "completed", 00:20:51.758 "digest": "sha384", 00:20:51.758 "dhgroup": "ffdhe2048" 00:20:51.758 } 00:20:51.758 } 00:20:51.758 ]' 00:20:51.758 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.017 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.276 11:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
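The trace above completes the sha384/ffdhe2048 pass: for each key id the host-side DH-CHAP options are pinned with bdev_nvme_set_options, the host NQN is registered on the target with a key pair, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, the qpair's negotiated auth parameters are checked with jq, and the controller is detached before the nvme-cli leg. Below is a minimal sketch of one such iteration, reconstructed from the commands visible in the trace; the NQNs, address, and key names are copied from the log, the key1/ckey1 keyring entries are assumed to have been loaded earlier in the run, and rpc_cmd is assumed to hit the target's default RPC socket (only the host socket, /var/tmp/host.sock, is shown explicitly in this excerpt):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side SPDK instance
  rpc_cmd() { "$rpc" "$@"; }                        # target-side instance (default socket assumed)

  # 1. Restrict the initiator to the digest/dhgroup under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # 2. Register the host on the target with a bidirectional key pair.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. Attach a controller; DH-HMAC-CHAP runs during this connect.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 4. Verify what the target negotiated on the new qpair.
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: sha384
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe2048
  # 5. Detach before repeating the handshake through nvme-cli.
  hostrpc bdev_nvme_detach_controller nvme0

The key3 iterations differ only in that no controller key is registered: the script's ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the option when ckey3 is empty, so those handshakes authenticate the host to the target only.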
00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.213 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.471 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.730 00:20:53.730 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.730 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.730 11:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.988 { 
00:20:53.988 "cntlid": 65, 00:20:53.988 "qid": 0, 00:20:53.988 "state": "enabled", 00:20:53.988 "thread": "nvmf_tgt_poll_group_000", 00:20:53.988 "listen_address": { 00:20:53.988 "trtype": "TCP", 00:20:53.988 "adrfam": "IPv4", 00:20:53.988 "traddr": "10.0.0.2", 00:20:53.988 "trsvcid": "4420" 00:20:53.988 }, 00:20:53.988 "peer_address": { 00:20:53.988 "trtype": "TCP", 00:20:53.988 "adrfam": "IPv4", 00:20:53.988 "traddr": "10.0.0.1", 00:20:53.988 "trsvcid": "36046" 00:20:53.988 }, 00:20:53.988 "auth": { 00:20:53.988 "state": "completed", 00:20:53.988 "digest": "sha384", 00:20:53.988 "dhgroup": "ffdhe3072" 00:20:53.988 } 00:20:53.988 } 00:20:53.988 ]' 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.988 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.577 11:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.213 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.498 11:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.756 00:20:55.756 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.756 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.756 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.014 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.014 { 00:20:56.014 "cntlid": 67, 00:20:56.014 "qid": 0, 00:20:56.014 "state": "enabled", 00:20:56.014 "thread": "nvmf_tgt_poll_group_000", 00:20:56.014 "listen_address": { 00:20:56.014 "trtype": "TCP", 00:20:56.014 "adrfam": "IPv4", 00:20:56.014 "traddr": "10.0.0.2", 00:20:56.014 "trsvcid": "4420" 00:20:56.014 }, 00:20:56.014 "peer_address": { 00:20:56.014 "trtype": "TCP", 00:20:56.014 "adrfam": "IPv4", 00:20:56.014 "traddr": "10.0.0.1", 00:20:56.014 "trsvcid": "36078" 00:20:56.014 }, 00:20:56.014 "auth": { 00:20:56.014 "state": "completed", 00:20:56.014 "digest": "sha384", 00:20:56.014 "dhgroup": "ffdhe3072" 00:20:56.014 } 00:20:56.015 } 00:20:56.015 ]' 00:20:56.015 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.273 11:07:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.273 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.530 11:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.465 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.726 11:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.985 00:20:57.985 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.985 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.985 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.243 { 00:20:58.243 "cntlid": 69, 00:20:58.243 "qid": 0, 00:20:58.243 "state": "enabled", 00:20:58.243 "thread": "nvmf_tgt_poll_group_000", 00:20:58.243 "listen_address": { 00:20:58.243 "trtype": "TCP", 00:20:58.243 "adrfam": "IPv4", 00:20:58.243 "traddr": "10.0.0.2", 00:20:58.243 "trsvcid": "4420" 00:20:58.243 }, 00:20:58.243 "peer_address": { 00:20:58.243 "trtype": "TCP", 00:20:58.243 "adrfam": "IPv4", 00:20:58.243 "traddr": "10.0.0.1", 00:20:58.243 "trsvcid": "47060" 00:20:58.243 }, 00:20:58.243 "auth": { 00:20:58.243 "state": "completed", 00:20:58.243 "digest": "sha384", 00:20:58.243 "dhgroup": "ffdhe3072" 00:20:58.243 } 00:20:58.243 } 00:20:58.243 ]' 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.243 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.503 11:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret 
DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.436 11:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.693 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.951 00:20:59.951 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.951 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.951 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.209 { 00:21:00.209 "cntlid": 71, 00:21:00.209 "qid": 0, 00:21:00.209 "state": "enabled", 00:21:00.209 "thread": "nvmf_tgt_poll_group_000", 00:21:00.209 "listen_address": { 00:21:00.209 "trtype": "TCP", 00:21:00.209 "adrfam": "IPv4", 00:21:00.209 "traddr": "10.0.0.2", 00:21:00.209 "trsvcid": "4420" 00:21:00.209 }, 00:21:00.209 "peer_address": { 00:21:00.209 "trtype": "TCP", 00:21:00.209 "adrfam": "IPv4", 00:21:00.209 "traddr": "10.0.0.1", 00:21:00.209 "trsvcid": "47088" 00:21:00.209 }, 00:21:00.209 "auth": { 00:21:00.209 "state": "completed", 00:21:00.209 "digest": "sha384", 00:21:00.209 "dhgroup": "ffdhe3072" 00:21:00.209 } 00:21:00.209 } 00:21:00.209 ]' 00:21:00.209 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.467 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.725 11:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.661 11:07:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.661 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.227 00:21:02.227 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.227 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.227 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.486 { 00:21:02.486 "cntlid": 73, 00:21:02.486 "qid": 0, 00:21:02.486 "state": "enabled", 00:21:02.486 "thread": "nvmf_tgt_poll_group_000", 00:21:02.486 "listen_address": { 00:21:02.486 "trtype": "TCP", 00:21:02.486 "adrfam": "IPv4", 00:21:02.486 "traddr": "10.0.0.2", 00:21:02.486 "trsvcid": "4420" 00:21:02.486 }, 00:21:02.486 "peer_address": { 00:21:02.486 "trtype": "TCP", 00:21:02.486 "adrfam": "IPv4", 00:21:02.486 "traddr": "10.0.0.1", 00:21:02.486 "trsvcid": "47116" 00:21:02.486 }, 00:21:02.486 "auth": { 00:21:02.486 
"state": "completed", 00:21:02.486 "digest": "sha384", 00:21:02.486 "dhgroup": "ffdhe4096" 00:21:02.486 } 00:21:02.486 } 00:21:02.486 ]' 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.486 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.487 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.487 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.487 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.487 11:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.745 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.682 11:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.940 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.198 00:21:04.198 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.198 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.198 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.455 { 00:21:04.455 "cntlid": 75, 00:21:04.455 "qid": 0, 00:21:04.455 "state": "enabled", 00:21:04.455 "thread": "nvmf_tgt_poll_group_000", 00:21:04.455 "listen_address": { 00:21:04.455 "trtype": "TCP", 00:21:04.455 "adrfam": "IPv4", 00:21:04.455 "traddr": "10.0.0.2", 00:21:04.455 "trsvcid": "4420" 00:21:04.455 }, 00:21:04.455 "peer_address": { 00:21:04.455 "trtype": "TCP", 00:21:04.455 "adrfam": "IPv4", 00:21:04.455 "traddr": "10.0.0.1", 00:21:04.455 "trsvcid": "47156" 00:21:04.455 }, 00:21:04.455 "auth": { 00:21:04.455 "state": "completed", 00:21:04.455 "digest": "sha384", 00:21:04.455 "dhgroup": "ffdhe4096" 00:21:04.455 } 00:21:04.455 } 00:21:04.455 ]' 00:21:04.455 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.712 11:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.969 11:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.906 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.164 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:06.421 00:21:06.421 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.421 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.421 11:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.679 { 00:21:06.679 "cntlid": 77, 00:21:06.679 "qid": 0, 00:21:06.679 "state": "enabled", 00:21:06.679 "thread": "nvmf_tgt_poll_group_000", 00:21:06.679 "listen_address": { 00:21:06.679 "trtype": "TCP", 00:21:06.679 "adrfam": "IPv4", 00:21:06.679 "traddr": "10.0.0.2", 00:21:06.679 "trsvcid": "4420" 00:21:06.679 }, 00:21:06.679 "peer_address": { 00:21:06.679 "trtype": "TCP", 00:21:06.679 "adrfam": "IPv4", 00:21:06.679 "traddr": "10.0.0.1", 00:21:06.679 "trsvcid": "47176" 00:21:06.679 }, 00:21:06.679 "auth": { 00:21:06.679 "state": "completed", 00:21:06.679 "digest": "sha384", 00:21:06.679 "dhgroup": "ffdhe4096" 00:21:06.679 } 00:21:06.679 } 00:21:06.679 ]' 00:21:06.679 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.937 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.197 11:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.135 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.393 11:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.651 00:21:08.651 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.651 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.651 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.909 { 00:21:08.909 "cntlid": 79, 00:21:08.909 "qid": 
0, 00:21:08.909 "state": "enabled", 00:21:08.909 "thread": "nvmf_tgt_poll_group_000", 00:21:08.909 "listen_address": { 00:21:08.909 "trtype": "TCP", 00:21:08.909 "adrfam": "IPv4", 00:21:08.909 "traddr": "10.0.0.2", 00:21:08.909 "trsvcid": "4420" 00:21:08.909 }, 00:21:08.909 "peer_address": { 00:21:08.909 "trtype": "TCP", 00:21:08.909 "adrfam": "IPv4", 00:21:08.909 "traddr": "10.0.0.1", 00:21:08.909 "trsvcid": "60392" 00:21:08.909 }, 00:21:08.909 "auth": { 00:21:08.909 "state": "completed", 00:21:08.909 "digest": "sha384", 00:21:08.909 "dhgroup": "ffdhe4096" 00:21:08.909 } 00:21:08.909 } 00:21:08.909 ]' 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.909 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.167 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.167 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.167 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.167 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.167 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.426 11:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.363 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.621 11:07:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.621 11:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.878 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.137 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.137 { 00:21:11.137 "cntlid": 81, 00:21:11.137 "qid": 0, 00:21:11.137 "state": "enabled", 00:21:11.137 "thread": "nvmf_tgt_poll_group_000", 00:21:11.137 "listen_address": { 00:21:11.137 "trtype": "TCP", 00:21:11.137 "adrfam": "IPv4", 00:21:11.137 "traddr": "10.0.0.2", 00:21:11.137 "trsvcid": "4420" 00:21:11.137 }, 00:21:11.137 "peer_address": { 00:21:11.137 "trtype": "TCP", 00:21:11.137 "adrfam": "IPv4", 00:21:11.137 "traddr": "10.0.0.1", 00:21:11.137 "trsvcid": "60436" 00:21:11.137 }, 00:21:11.137 "auth": { 00:21:11.137 "state": "completed", 00:21:11.137 "digest": "sha384", 00:21:11.137 "dhgroup": "ffdhe6144" 00:21:11.137 } 00:21:11.137 } 00:21:11.137 ]' 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.395 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.652 11:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.589 11:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.847 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.416 00:21:13.416 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.416 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.416 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.675 { 00:21:13.675 "cntlid": 83, 00:21:13.675 "qid": 0, 00:21:13.675 "state": "enabled", 00:21:13.675 "thread": "nvmf_tgt_poll_group_000", 00:21:13.675 "listen_address": { 00:21:13.675 "trtype": "TCP", 00:21:13.675 "adrfam": "IPv4", 00:21:13.675 "traddr": "10.0.0.2", 00:21:13.675 "trsvcid": "4420" 00:21:13.675 }, 00:21:13.675 "peer_address": { 00:21:13.675 "trtype": "TCP", 00:21:13.675 "adrfam": "IPv4", 00:21:13.675 "traddr": "10.0.0.1", 00:21:13.675 "trsvcid": "60458" 00:21:13.675 }, 00:21:13.675 "auth": { 00:21:13.675 "state": "completed", 00:21:13.675 "digest": "sha384", 00:21:13.675 "dhgroup": "ffdhe6144" 00:21:13.675 } 00:21:13.675 } 00:21:13.675 ]' 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.675 11:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.675 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.675 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.675 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.934 11:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret 
DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.877 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.135 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.703 00:21:15.703 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.703 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.703 11:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.961 { 00:21:15.961 "cntlid": 85, 00:21:15.961 "qid": 0, 00:21:15.961 "state": "enabled", 00:21:15.961 "thread": "nvmf_tgt_poll_group_000", 00:21:15.961 "listen_address": { 00:21:15.961 "trtype": "TCP", 00:21:15.961 "adrfam": "IPv4", 00:21:15.961 "traddr": "10.0.0.2", 00:21:15.961 "trsvcid": "4420" 00:21:15.961 }, 00:21:15.961 "peer_address": { 00:21:15.961 "trtype": "TCP", 00:21:15.961 "adrfam": "IPv4", 00:21:15.961 "traddr": "10.0.0.1", 00:21:15.961 "trsvcid": "60484" 00:21:15.961 }, 00:21:15.961 "auth": { 00:21:15.961 "state": "completed", 00:21:15.961 "digest": "sha384", 00:21:15.961 "dhgroup": "ffdhe6144" 00:21:15.961 } 00:21:15.961 } 00:21:15.961 ]' 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.961 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.220 11:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:21:17.154 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.412 11:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.977 00:21:17.977 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.977 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.977 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.235 { 00:21:18.235 "cntlid": 87, 00:21:18.235 "qid": 0, 00:21:18.235 "state": "enabled", 00:21:18.235 "thread": "nvmf_tgt_poll_group_000", 00:21:18.235 "listen_address": { 00:21:18.235 "trtype": "TCP", 00:21:18.235 "adrfam": "IPv4", 00:21:18.235 "traddr": "10.0.0.2", 00:21:18.235 "trsvcid": "4420" 00:21:18.235 }, 00:21:18.235 "peer_address": { 00:21:18.235 "trtype": "TCP", 00:21:18.235 "adrfam": "IPv4", 00:21:18.235 "traddr": "10.0.0.1", 00:21:18.235 "trsvcid": "53540" 00:21:18.235 }, 00:21:18.235 "auth": { 00:21:18.235 "state": "completed", 
00:21:18.235 "digest": "sha384", 00:21:18.235 "dhgroup": "ffdhe6144" 00:21:18.235 } 00:21:18.235 } 00:21:18.235 ]' 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.235 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.495 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.495 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.495 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.755 11:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.691 11:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.949 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.882 00:21:20.882 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.882 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.882 11:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.882 { 00:21:20.882 "cntlid": 89, 00:21:20.882 "qid": 0, 00:21:20.882 "state": "enabled", 00:21:20.882 "thread": "nvmf_tgt_poll_group_000", 00:21:20.882 "listen_address": { 00:21:20.882 "trtype": "TCP", 00:21:20.882 "adrfam": "IPv4", 00:21:20.882 "traddr": "10.0.0.2", 00:21:20.882 "trsvcid": "4420" 00:21:20.882 }, 00:21:20.882 "peer_address": { 00:21:20.882 "trtype": "TCP", 00:21:20.882 "adrfam": "IPv4", 00:21:20.882 "traddr": "10.0.0.1", 00:21:20.882 "trsvcid": "53580" 00:21:20.882 }, 00:21:20.882 "auth": { 00:21:20.882 "state": "completed", 00:21:20.882 "digest": "sha384", 00:21:20.882 "dhgroup": "ffdhe8192" 00:21:20.882 } 00:21:20.882 } 00:21:20.882 ]' 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.882 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.883 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.140 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.140 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.140 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.140 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.140 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.397 11:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.334 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.592 11:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:21:23.527 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.527 { 00:21:23.527 "cntlid": 91, 00:21:23.527 "qid": 0, 00:21:23.527 "state": "enabled", 00:21:23.527 "thread": "nvmf_tgt_poll_group_000", 00:21:23.527 "listen_address": { 00:21:23.527 "trtype": "TCP", 00:21:23.527 "adrfam": "IPv4", 00:21:23.527 "traddr": "10.0.0.2", 00:21:23.527 "trsvcid": "4420" 00:21:23.527 }, 00:21:23.527 "peer_address": { 00:21:23.527 "trtype": "TCP", 00:21:23.527 "adrfam": "IPv4", 00:21:23.527 "traddr": "10.0.0.1", 00:21:23.527 "trsvcid": "53602" 00:21:23.527 }, 00:21:23.527 "auth": { 00:21:23.527 "state": "completed", 00:21:23.527 "digest": "sha384", 00:21:23.527 "dhgroup": "ffdhe8192" 00:21:23.527 } 00:21:23.527 } 00:21:23.527 ]' 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.527 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.785 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.785 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.785 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.785 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.785 11:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.043 11:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.979 11:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.918 00:21:25.918 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.918 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.918 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.176 { 
00:21:26.176 "cntlid": 93, 00:21:26.176 "qid": 0, 00:21:26.176 "state": "enabled", 00:21:26.176 "thread": "nvmf_tgt_poll_group_000", 00:21:26.176 "listen_address": { 00:21:26.176 "trtype": "TCP", 00:21:26.176 "adrfam": "IPv4", 00:21:26.176 "traddr": "10.0.0.2", 00:21:26.176 "trsvcid": "4420" 00:21:26.176 }, 00:21:26.176 "peer_address": { 00:21:26.176 "trtype": "TCP", 00:21:26.176 "adrfam": "IPv4", 00:21:26.176 "traddr": "10.0.0.1", 00:21:26.176 "trsvcid": "53630" 00:21:26.176 }, 00:21:26.176 "auth": { 00:21:26.176 "state": "completed", 00:21:26.176 "digest": "sha384", 00:21:26.176 "dhgroup": "ffdhe8192" 00:21:26.176 } 00:21:26.176 } 00:21:26.176 ]' 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.176 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.436 11:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:27.370 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.370 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.370 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.370 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.630 11:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.630 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.630 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.630 11:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.630 11:07:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.630 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.890 11:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.890 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.890 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.457 00:21:28.457 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.457 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.457 11:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.716 { 00:21:28.716 "cntlid": 95, 00:21:28.716 "qid": 0, 00:21:28.716 "state": "enabled", 00:21:28.716 "thread": "nvmf_tgt_poll_group_000", 00:21:28.716 "listen_address": { 00:21:28.716 "trtype": "TCP", 00:21:28.716 "adrfam": "IPv4", 00:21:28.716 "traddr": "10.0.0.2", 00:21:28.716 "trsvcid": "4420" 00:21:28.716 }, 00:21:28.716 "peer_address": { 00:21:28.716 "trtype": "TCP", 00:21:28.716 "adrfam": "IPv4", 00:21:28.716 "traddr": "10.0.0.1", 00:21:28.716 "trsvcid": "44124" 00:21:28.716 }, 00:21:28.716 "auth": { 00:21:28.716 "state": "completed", 00:21:28.716 "digest": "sha384", 00:21:28.716 "dhgroup": "ffdhe8192" 00:21:28.716 } 00:21:28.716 } 00:21:28.716 ]' 00:21:28.716 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.974 11:07:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.974 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.232 11:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:30.167 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.426 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.684 00:21:30.684 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.684 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.684 11:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.942 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.942 { 00:21:30.942 "cntlid": 97, 00:21:30.942 "qid": 0, 00:21:30.942 "state": "enabled", 00:21:30.942 "thread": "nvmf_tgt_poll_group_000", 00:21:30.942 "listen_address": { 00:21:30.942 "trtype": "TCP", 00:21:30.942 "adrfam": "IPv4", 00:21:30.942 "traddr": "10.0.0.2", 00:21:30.943 "trsvcid": "4420" 00:21:30.943 }, 00:21:30.943 "peer_address": { 00:21:30.943 "trtype": "TCP", 00:21:30.943 "adrfam": "IPv4", 00:21:30.943 "traddr": "10.0.0.1", 00:21:30.943 "trsvcid": "44148" 00:21:30.943 }, 00:21:30.943 "auth": { 00:21:30.943 "state": "completed", 00:21:30.943 "digest": "sha512", 00:21:30.943 "dhgroup": "null" 00:21:30.943 } 00:21:30.943 } 00:21:30.943 ]' 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.943 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.201 11:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret 
DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.139 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.140 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.398 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.656 00:21:32.656 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.656 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.656 11:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.914 { 00:21:32.914 "cntlid": 99, 00:21:32.914 "qid": 0, 00:21:32.914 "state": "enabled", 00:21:32.914 "thread": "nvmf_tgt_poll_group_000", 00:21:32.914 "listen_address": { 00:21:32.914 "trtype": "TCP", 00:21:32.914 "adrfam": "IPv4", 00:21:32.914 "traddr": "10.0.0.2", 00:21:32.914 "trsvcid": "4420" 00:21:32.914 }, 00:21:32.914 "peer_address": { 00:21:32.914 "trtype": "TCP", 00:21:32.914 "adrfam": "IPv4", 00:21:32.914 "traddr": "10.0.0.1", 00:21:32.914 "trsvcid": "44174" 00:21:32.914 }, 00:21:32.914 "auth": { 00:21:32.914 "state": "completed", 00:21:32.914 "digest": "sha512", 00:21:32.914 "dhgroup": "null" 00:21:32.914 } 00:21:32.914 } 00:21:32.914 ]' 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.914 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.173 11:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.111 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.111 11:07:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.370 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.627 00:21:34.627 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.627 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.627 11:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.884 { 00:21:34.884 "cntlid": 101, 00:21:34.884 "qid": 0, 00:21:34.884 "state": "enabled", 00:21:34.884 "thread": "nvmf_tgt_poll_group_000", 00:21:34.884 "listen_address": { 00:21:34.884 "trtype": "TCP", 00:21:34.884 "adrfam": "IPv4", 00:21:34.884 "traddr": "10.0.0.2", 00:21:34.884 "trsvcid": "4420" 00:21:34.884 }, 00:21:34.884 "peer_address": { 00:21:34.884 "trtype": "TCP", 00:21:34.884 "adrfam": "IPv4", 00:21:34.884 "traddr": "10.0.0.1", 00:21:34.884 "trsvcid": "44214" 00:21:34.884 }, 00:21:34.884 "auth": 
{ 00:21:34.884 "state": "completed", 00:21:34.884 "digest": "sha512", 00:21:34.884 "dhgroup": "null" 00:21:34.884 } 00:21:34.884 } 00:21:34.884 ]' 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:34.884 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.145 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.145 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.145 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.404 11:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.341 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.599 11:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.858 00:21:36.858 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.858 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.858 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.116 { 00:21:37.116 "cntlid": 103, 00:21:37.116 "qid": 0, 00:21:37.116 "state": "enabled", 00:21:37.116 "thread": "nvmf_tgt_poll_group_000", 00:21:37.116 "listen_address": { 00:21:37.116 "trtype": "TCP", 00:21:37.116 "adrfam": "IPv4", 00:21:37.116 "traddr": "10.0.0.2", 00:21:37.116 "trsvcid": "4420" 00:21:37.116 }, 00:21:37.116 "peer_address": { 00:21:37.116 "trtype": "TCP", 00:21:37.116 "adrfam": "IPv4", 00:21:37.116 "traddr": "10.0.0.1", 00:21:37.116 "trsvcid": "44236" 00:21:37.116 }, 00:21:37.116 "auth": { 00:21:37.116 "state": "completed", 00:21:37.116 "digest": "sha512", 00:21:37.116 "dhgroup": "null" 00:21:37.116 } 00:21:37.116 } 00:21:37.116 ]' 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.116 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.375 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.375 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.375 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.633 11:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.572 11:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.139 00:21:39.139 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.139 11:07:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.139 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.396 { 00:21:39.396 "cntlid": 105, 00:21:39.396 "qid": 0, 00:21:39.396 "state": "enabled", 00:21:39.396 "thread": "nvmf_tgt_poll_group_000", 00:21:39.396 "listen_address": { 00:21:39.396 "trtype": "TCP", 00:21:39.396 "adrfam": "IPv4", 00:21:39.396 "traddr": "10.0.0.2", 00:21:39.396 "trsvcid": "4420" 00:21:39.396 }, 00:21:39.396 "peer_address": { 00:21:39.396 "trtype": "TCP", 00:21:39.396 "adrfam": "IPv4", 00:21:39.396 "traddr": "10.0.0.1", 00:21:39.396 "trsvcid": "36396" 00:21:39.396 }, 00:21:39.396 "auth": { 00:21:39.396 "state": "completed", 00:21:39.396 "digest": "sha512", 00:21:39.396 "dhgroup": "ffdhe2048" 00:21:39.396 } 00:21:39.396 } 00:21:39.396 ]' 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.396 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.656 11:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
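Every iteration above follows the same DH-HMAC-CHAP round trip, driven through two RPC endpoints: the target's default socket for the nvmf_* calls and the host daemon at /var/tmp/host.sock for the bdev_nvme_* calls. Below is a minimal sketch of one iteration, reconstructed only from the RPCs visible in this log; the rpc.py path, NQNs, addresses, and key names are the ones the test uses, and a running target and host daemon with the DH-CHAP keys already loaded are assumed. $SECRET stands in for the literal DHHC-1:... string the log passes to nvme connect.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Pin the host-side initiator to one digest/dhgroup combination.
    $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Allow the host on the subsystem with the key pair under test (target side).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach a controller through the host daemon, then verify the qpair authenticated.
    $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"

    # Tear down: detach, reconnect once through the kernel initiator, then revoke the host.
    $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret "$SECRET"
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The outer loops (target/auth.sh@91-93 in the trace) sweep this round trip over every digest, dhgroup, and key index, which is why the same sequence repeats with only the --dhchap-dhgroups value and the key number changing.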
00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.594 11:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.852 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.110 00:21:41.110 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.110 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.110 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.367 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.367 { 00:21:41.367 "cntlid": 107, 00:21:41.367 "qid": 0, 00:21:41.367 "state": "enabled", 00:21:41.367 "thread": 
"nvmf_tgt_poll_group_000", 00:21:41.367 "listen_address": { 00:21:41.367 "trtype": "TCP", 00:21:41.367 "adrfam": "IPv4", 00:21:41.367 "traddr": "10.0.0.2", 00:21:41.367 "trsvcid": "4420" 00:21:41.367 }, 00:21:41.367 "peer_address": { 00:21:41.367 "trtype": "TCP", 00:21:41.367 "adrfam": "IPv4", 00:21:41.367 "traddr": "10.0.0.1", 00:21:41.367 "trsvcid": "36422" 00:21:41.368 }, 00:21:41.368 "auth": { 00:21:41.368 "state": "completed", 00:21:41.368 "digest": "sha512", 00:21:41.368 "dhgroup": "ffdhe2048" 00:21:41.368 } 00:21:41.368 } 00:21:41.368 ]' 00:21:41.368 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.368 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.368 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.368 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.368 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.627 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.627 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.627 11:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.627 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:42.564 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.565 11:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:42.823 11:07:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.823 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.080 00:21:43.081 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.081 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.081 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.340 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.340 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.340 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.340 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.598 { 00:21:43.598 "cntlid": 109, 00:21:43.598 "qid": 0, 00:21:43.598 "state": "enabled", 00:21:43.598 "thread": "nvmf_tgt_poll_group_000", 00:21:43.598 "listen_address": { 00:21:43.598 "trtype": "TCP", 00:21:43.598 "adrfam": "IPv4", 00:21:43.598 "traddr": "10.0.0.2", 00:21:43.598 "trsvcid": "4420" 00:21:43.598 }, 00:21:43.598 "peer_address": { 00:21:43.598 "trtype": "TCP", 00:21:43.598 "adrfam": "IPv4", 00:21:43.598 "traddr": "10.0.0.1", 00:21:43.598 "trsvcid": "36434" 00:21:43.598 }, 00:21:43.598 "auth": { 00:21:43.598 "state": "completed", 00:21:43.598 "digest": "sha512", 00:21:43.598 "dhgroup": "ffdhe2048" 00:21:43.598 } 00:21:43.598 } 00:21:43.598 ]' 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.598 11:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.856 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.789 11:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.049 11:07:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.307 00:21:45.307 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.307 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.307 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.565 { 00:21:45.565 "cntlid": 111, 00:21:45.565 "qid": 0, 00:21:45.565 "state": "enabled", 00:21:45.565 "thread": "nvmf_tgt_poll_group_000", 00:21:45.565 "listen_address": { 00:21:45.565 "trtype": "TCP", 00:21:45.565 "adrfam": "IPv4", 00:21:45.565 "traddr": "10.0.0.2", 00:21:45.565 "trsvcid": "4420" 00:21:45.565 }, 00:21:45.565 "peer_address": { 00:21:45.565 "trtype": "TCP", 00:21:45.565 "adrfam": "IPv4", 00:21:45.565 "traddr": "10.0.0.1", 00:21:45.565 "trsvcid": "36458" 00:21:45.565 }, 00:21:45.565 "auth": { 00:21:45.565 "state": "completed", 00:21:45.565 "digest": "sha512", 00:21:45.565 "dhgroup": "ffdhe2048" 00:21:45.565 } 00:21:45.565 } 00:21:45.565 ]' 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.565 11:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.823 11:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.758 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:46.759 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.015 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.273 00:21:47.273 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.273 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.273 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.530 { 00:21:47.530 "cntlid": 113, 00:21:47.530 "qid": 0, 00:21:47.530 "state": "enabled", 00:21:47.530 "thread": "nvmf_tgt_poll_group_000", 00:21:47.530 "listen_address": { 00:21:47.530 "trtype": "TCP", 00:21:47.530 "adrfam": "IPv4", 00:21:47.530 "traddr": "10.0.0.2", 00:21:47.530 "trsvcid": "4420" 00:21:47.530 }, 00:21:47.530 "peer_address": { 00:21:47.530 "trtype": "TCP", 00:21:47.530 "adrfam": "IPv4", 00:21:47.530 "traddr": "10.0.0.1", 00:21:47.530 "trsvcid": "41438" 00:21:47.530 }, 00:21:47.530 "auth": { 00:21:47.530 "state": "completed", 00:21:47.530 "digest": "sha512", 00:21:47.530 "dhgroup": "ffdhe3072" 00:21:47.530 } 00:21:47.530 } 00:21:47.530 ]' 00:21:47.530 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.788 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.788 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.788 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.788 11:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.788 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.788 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.788 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.047 11:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.984 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.248 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.507 00:21:49.507 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.507 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.507 11:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.764 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.764 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.764 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.764 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.765 { 00:21:49.765 "cntlid": 115, 00:21:49.765 "qid": 0, 00:21:49.765 "state": "enabled", 00:21:49.765 "thread": "nvmf_tgt_poll_group_000", 00:21:49.765 "listen_address": { 00:21:49.765 "trtype": "TCP", 00:21:49.765 "adrfam": "IPv4", 00:21:49.765 "traddr": "10.0.0.2", 00:21:49.765 "trsvcid": "4420" 00:21:49.765 }, 00:21:49.765 "peer_address": { 00:21:49.765 "trtype": "TCP", 00:21:49.765 "adrfam": "IPv4", 00:21:49.765 "traddr": "10.0.0.1", 00:21:49.765 "trsvcid": "41458" 00:21:49.765 }, 00:21:49.765 "auth": { 00:21:49.765 "state": "completed", 00:21:49.765 "digest": "sha512", 00:21:49.765 "dhgroup": "ffdhe3072" 00:21:49.765 } 00:21:49.765 } 
00:21:49.765 ]' 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.765 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.023 11:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.957 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.215 11:08:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.215 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.472 00:21:51.472 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.472 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.472 11:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.729 { 00:21:51.729 "cntlid": 117, 00:21:51.729 "qid": 0, 00:21:51.729 "state": "enabled", 00:21:51.729 "thread": "nvmf_tgt_poll_group_000", 00:21:51.729 "listen_address": { 00:21:51.729 "trtype": "TCP", 00:21:51.729 "adrfam": "IPv4", 00:21:51.729 "traddr": "10.0.0.2", 00:21:51.729 "trsvcid": "4420" 00:21:51.729 }, 00:21:51.729 "peer_address": { 00:21:51.729 "trtype": "TCP", 00:21:51.729 "adrfam": "IPv4", 00:21:51.729 "traddr": "10.0.0.1", 00:21:51.729 "trsvcid": "41482" 00:21:51.729 }, 00:21:51.729 "auth": { 00:21:51.729 "state": "completed", 00:21:51.729 "digest": "sha512", 00:21:51.729 "dhgroup": "ffdhe3072" 00:21:51.729 } 00:21:51.729 } 00:21:51.729 ]' 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.729 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.988 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.988 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.988 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.248 11:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.183 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.751 00:21:53.751 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.751 11:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.751 11:08:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.010 { 00:21:54.010 "cntlid": 119, 00:21:54.010 "qid": 0, 00:21:54.010 "state": "enabled", 00:21:54.010 "thread": "nvmf_tgt_poll_group_000", 00:21:54.010 "listen_address": { 00:21:54.010 "trtype": "TCP", 00:21:54.010 "adrfam": "IPv4", 00:21:54.010 "traddr": "10.0.0.2", 00:21:54.010 "trsvcid": "4420" 00:21:54.010 }, 00:21:54.010 "peer_address": { 00:21:54.010 "trtype": "TCP", 00:21:54.010 "adrfam": "IPv4", 00:21:54.010 "traddr": "10.0.0.1", 00:21:54.010 "trsvcid": "41502" 00:21:54.010 }, 00:21:54.010 "auth": { 00:21:54.010 "state": "completed", 00:21:54.010 "digest": "sha512", 00:21:54.010 "dhgroup": "ffdhe3072" 00:21:54.010 } 00:21:54.010 } 00:21:54.010 ]' 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.010 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.268 11:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.205 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.205 11:08:09 
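(Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above: the controller key is optional. For key3 the ckeys entry is empty, so nvmf_subsystem_add_host ran with --dhchap-key key3 alone and the matching nvme connect used only --dhchap-secret, i.e. unidirectional authentication in which the host proves itself but does not challenge the controller back. A sketch of the pattern, with keyid as an illustrative stand-in for the script's $3 and subnqn/hostnqn for its NQN variables:

# ckeys[keyid] empty => ckey expands to nothing and no --dhchap-ctrlr-key is passed
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
)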
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.206 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.206 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.476 11:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.742 00:21:55.742 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.742 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.742 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.014 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.015 { 00:21:56.015 "cntlid": 121, 00:21:56.015 "qid": 0, 00:21:56.015 "state": "enabled", 00:21:56.015 "thread": "nvmf_tgt_poll_group_000", 00:21:56.015 "listen_address": { 00:21:56.015 "trtype": "TCP", 00:21:56.015 "adrfam": "IPv4", 
00:21:56.015 "traddr": "10.0.0.2", 00:21:56.015 "trsvcid": "4420" 00:21:56.015 }, 00:21:56.015 "peer_address": { 00:21:56.015 "trtype": "TCP", 00:21:56.015 "adrfam": "IPv4", 00:21:56.015 "traddr": "10.0.0.1", 00:21:56.015 "trsvcid": "41536" 00:21:56.015 }, 00:21:56.015 "auth": { 00:21:56.015 "state": "completed", 00:21:56.015 "digest": "sha512", 00:21:56.015 "dhgroup": "ffdhe4096" 00:21:56.015 } 00:21:56.015 } 00:21:56.015 ]' 00:21:56.015 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.304 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.591 11:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.557 11:08:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.557 11:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.125 00:21:58.125 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.125 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.125 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.383 { 00:21:58.383 "cntlid": 123, 00:21:58.383 "qid": 0, 00:21:58.383 "state": "enabled", 00:21:58.383 "thread": "nvmf_tgt_poll_group_000", 00:21:58.383 "listen_address": { 00:21:58.383 "trtype": "TCP", 00:21:58.383 "adrfam": "IPv4", 00:21:58.383 "traddr": "10.0.0.2", 00:21:58.383 "trsvcid": "4420" 00:21:58.383 }, 00:21:58.383 "peer_address": { 00:21:58.383 "trtype": "TCP", 00:21:58.383 "adrfam": "IPv4", 00:21:58.383 "traddr": "10.0.0.1", 00:21:58.383 "trsvcid": "48064" 00:21:58.383 }, 00:21:58.383 "auth": { 00:21:58.383 "state": "completed", 00:21:58.383 "digest": "sha512", 00:21:58.383 "dhgroup": "ffdhe4096" 00:21:58.383 } 00:21:58.383 } 00:21:58.383 ]' 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.383 11:08:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.383 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.641 11:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.576 11:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.833 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.398 00:22:00.398 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.398 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.398 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.656 { 00:22:00.656 "cntlid": 125, 00:22:00.656 "qid": 0, 00:22:00.656 "state": "enabled", 00:22:00.656 "thread": "nvmf_tgt_poll_group_000", 00:22:00.656 "listen_address": { 00:22:00.656 "trtype": "TCP", 00:22:00.656 "adrfam": "IPv4", 00:22:00.656 "traddr": "10.0.0.2", 00:22:00.656 "trsvcid": "4420" 00:22:00.656 }, 00:22:00.656 "peer_address": { 00:22:00.656 "trtype": "TCP", 00:22:00.656 "adrfam": "IPv4", 00:22:00.656 "traddr": "10.0.0.1", 00:22:00.656 "trsvcid": "48088" 00:22:00.656 }, 00:22:00.656 "auth": { 00:22:00.656 "state": "completed", 00:22:00.656 "digest": "sha512", 00:22:00.656 "dhgroup": "ffdhe4096" 00:22:00.656 } 00:22:00.656 } 00:22:00.656 ]' 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.656 11:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.656 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.656 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.656 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.656 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.656 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.916 11:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
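(That "disconnected 1 controller(s)" line closes the kernel-initiator half of the cycle: after the SPDK-host round trip, the same key pair is exercised through nvme-cli, which takes the secrets in DHHC-1 wire format directly on the command line; in this cycle the host secret is tagged :02: and the controller secret :01:. The shape of that call, with the secret bodies elided here since the full values appear in the log above:

# Kernel initiator path; secret bodies elided (full values are in the log)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
)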
00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.853 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.113 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.371 11:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.371 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.371 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.627 00:22:02.627 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.627 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.627 11:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.884 { 00:22:02.884 "cntlid": 127, 00:22:02.884 "qid": 0, 00:22:02.884 "state": "enabled", 00:22:02.884 "thread": "nvmf_tgt_poll_group_000", 00:22:02.884 "listen_address": { 00:22:02.884 "trtype": "TCP", 00:22:02.884 "adrfam": "IPv4", 00:22:02.884 "traddr": "10.0.0.2", 00:22:02.884 "trsvcid": "4420" 00:22:02.884 }, 00:22:02.884 "peer_address": { 00:22:02.884 "trtype": "TCP", 00:22:02.884 "adrfam": "IPv4", 00:22:02.884 "traddr": "10.0.0.1", 00:22:02.884 "trsvcid": "48120" 00:22:02.884 }, 00:22:02.884 "auth": { 00:22:02.884 "state": "completed", 00:22:02.884 "digest": "sha512", 00:22:02.884 "dhgroup": "ffdhe4096" 00:22:02.884 } 00:22:02.884 } 00:22:02.884 ]' 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.884 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.143 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.143 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.143 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.143 11:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.078 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.334 11:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.898 00:22:04.898 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.898 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.898 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.156 { 00:22:05.156 "cntlid": 129, 00:22:05.156 "qid": 0, 00:22:05.156 "state": "enabled", 00:22:05.156 "thread": "nvmf_tgt_poll_group_000", 00:22:05.156 "listen_address": { 00:22:05.156 "trtype": "TCP", 00:22:05.156 "adrfam": "IPv4", 00:22:05.156 "traddr": "10.0.0.2", 00:22:05.156 "trsvcid": "4420" 00:22:05.156 }, 00:22:05.156 "peer_address": { 00:22:05.156 "trtype": "TCP", 00:22:05.156 "adrfam": "IPv4", 00:22:05.156 "traddr": "10.0.0.1", 00:22:05.156 "trsvcid": "48156" 00:22:05.156 }, 00:22:05.156 "auth": { 00:22:05.156 "state": "completed", 00:22:05.156 "digest": "sha512", 00:22:05.156 "dhgroup": "ffdhe6144" 00:22:05.156 } 00:22:05.156 } 00:22:05.156 ]' 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.156 11:08:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.156 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.415 11:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.352 11:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.923 11:08:21 
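(Each (digest, dhgroup, key) iteration starts by pinning the host's negotiable algorithms, so a successful handshake can only mean the combination under test was actually used; at this point in the log that is sha512 with ffdhe6144. The host-side RPC, again abbreviated to rpc.py:

# Restrict the SPDK host to the digest/dhgroup pair under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
)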
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.923 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.183 00:22:07.442 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.442 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.442 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.701 { 00:22:07.701 "cntlid": 131, 00:22:07.701 "qid": 0, 00:22:07.701 "state": "enabled", 00:22:07.701 "thread": "nvmf_tgt_poll_group_000", 00:22:07.701 "listen_address": { 00:22:07.701 "trtype": "TCP", 00:22:07.701 "adrfam": "IPv4", 00:22:07.701 "traddr": "10.0.0.2", 00:22:07.701 "trsvcid": "4420" 00:22:07.701 }, 00:22:07.701 "peer_address": { 00:22:07.701 "trtype": "TCP", 00:22:07.701 "adrfam": "IPv4", 00:22:07.701 "traddr": "10.0.0.1", 00:22:07.701 "trsvcid": "48180" 00:22:07.701 }, 00:22:07.701 "auth": { 00:22:07.701 "state": "completed", 00:22:07.701 "digest": "sha512", 00:22:07.701 "dhgroup": "ffdhe6144" 00:22:07.701 } 00:22:07.701 } 00:22:07.701 ]' 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.701 11:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.959 11:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.892 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.151 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.715 00:22:09.715 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.715 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.715 11:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.715 { 00:22:09.715 "cntlid": 133, 00:22:09.715 "qid": 0, 00:22:09.715 "state": "enabled", 00:22:09.715 "thread": "nvmf_tgt_poll_group_000", 00:22:09.715 "listen_address": { 00:22:09.715 "trtype": "TCP", 00:22:09.715 "adrfam": "IPv4", 00:22:09.715 "traddr": "10.0.0.2", 00:22:09.715 "trsvcid": "4420" 00:22:09.715 }, 00:22:09.715 "peer_address": { 00:22:09.715 "trtype": "TCP", 00:22:09.715 "adrfam": "IPv4", 00:22:09.715 "traddr": "10.0.0.1", 00:22:09.715 "trsvcid": "59994" 00:22:09.715 }, 00:22:09.715 "auth": { 00:22:09.715 "state": "completed", 00:22:09.715 "digest": "sha512", 00:22:09.715 "dhgroup": "ffdhe6144" 00:22:09.715 } 00:22:09.715 } 00:22:09.715 ]' 00:22:09.715 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.973 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.234 11:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
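(The verification step in each cycle does more than check that the attach returned: it pulls the qpair list from the target and asserts that DH-HMAC-CHAP ran to completion with the expected parameters. A condensed form of the three jq probes the log shows, assuming rpc.py on the target's default socket and the ffdhe6144 round in progress here:

qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
)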
00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.171 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.172 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.172 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.431 11:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.431 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.431 11:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.688 00:22:11.688 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.688 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.688 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.946 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.946 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.946 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.946 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.204 { 00:22:12.204 "cntlid": 135, 00:22:12.204 "qid": 0, 00:22:12.204 "state": "enabled", 00:22:12.204 "thread": "nvmf_tgt_poll_group_000", 00:22:12.204 "listen_address": { 00:22:12.204 "trtype": "TCP", 00:22:12.204 "adrfam": "IPv4", 00:22:12.204 "traddr": "10.0.0.2", 00:22:12.204 "trsvcid": 
"4420" 00:22:12.204 }, 00:22:12.204 "peer_address": { 00:22:12.204 "trtype": "TCP", 00:22:12.204 "adrfam": "IPv4", 00:22:12.204 "traddr": "10.0.0.1", 00:22:12.204 "trsvcid": "60004" 00:22:12.204 }, 00:22:12.204 "auth": { 00:22:12.204 "state": "completed", 00:22:12.204 "digest": "sha512", 00:22:12.204 "dhgroup": "ffdhe6144" 00:22:12.204 } 00:22:12.204 } 00:22:12.204 ]' 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.204 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.461 11:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.395 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.654 11:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.589 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.589 { 00:22:14.589 "cntlid": 137, 00:22:14.589 "qid": 0, 00:22:14.589 "state": "enabled", 00:22:14.589 "thread": "nvmf_tgt_poll_group_000", 00:22:14.589 "listen_address": { 00:22:14.589 "trtype": "TCP", 00:22:14.589 "adrfam": "IPv4", 00:22:14.589 "traddr": "10.0.0.2", 00:22:14.589 "trsvcid": "4420" 00:22:14.589 }, 00:22:14.589 "peer_address": { 00:22:14.589 "trtype": "TCP", 00:22:14.589 "adrfam": "IPv4", 00:22:14.589 "traddr": "10.0.0.1", 00:22:14.589 "trsvcid": "60034" 00:22:14.589 }, 00:22:14.589 "auth": { 00:22:14.589 "state": "completed", 00:22:14.589 "digest": "sha512", 00:22:14.589 "dhgroup": "ffdhe8192" 00:22:14.589 } 00:22:14.589 } 00:22:14.589 ]' 00:22:14.589 11:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.847 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.104 11:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.045 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.303 11:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.236 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.236 { 00:22:17.236 "cntlid": 139, 00:22:17.236 "qid": 0, 00:22:17.236 "state": "enabled", 00:22:17.236 "thread": "nvmf_tgt_poll_group_000", 00:22:17.236 "listen_address": { 00:22:17.236 "trtype": "TCP", 00:22:17.236 "adrfam": "IPv4", 00:22:17.236 "traddr": "10.0.0.2", 00:22:17.236 "trsvcid": "4420" 00:22:17.236 }, 00:22:17.236 "peer_address": { 00:22:17.236 "trtype": "TCP", 00:22:17.236 "adrfam": "IPv4", 00:22:17.236 "traddr": "10.0.0.1", 00:22:17.236 "trsvcid": "60070" 00:22:17.236 }, 00:22:17.236 "auth": { 00:22:17.236 "state": "completed", 00:22:17.236 "digest": "sha512", 00:22:17.236 "dhgroup": "ffdhe8192" 00:22:17.236 } 00:22:17.236 } 00:22:17.236 ]' 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.236 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.493 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.493 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.493 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.493 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.493 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.751 11:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NDI0MTI3MjIyZmM5NjFmMTQ2ZmQ0NmUyNzM1NmE5NTPppMOS: --dhchap-ctrl-secret DHHC-1:02:YTY1MzYwYmZmZGNmMmI3ZDJiOWViNzQzNGRlY2RhY2Q2ZjZjYmRkZTliZTBmZmM0n16ytg==: 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
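Each pass of the digest/dhgroup/key loop traced above reduces to the same four-step sequence: constrain the host initiator's DH-HMAC-CHAP options, register the host NQN on the subsystem with the key under test, attach a controller (which is where the handshake actually runs), then inspect the negotiated qpair and tear down. A condensed sketch of one pass, using only commands and flags that appear verbatim in this log; the target-side rpc.py calls are assumed to use the app's default /var/tmp/spdk.sock socket, and key1/ckey1 are the key names set up earlier in the test:

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass (digest, dhgroup, and key index
# are the loop variables in target/auth.sh).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

# 1) Restrict the host-side initiator to one digest and one DH group.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# 2) Allow the host on the target subsystem with the key under test
#    (the controller key enables bidirectional authentication).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) Attach a controller; DH-HMAC-CHAP runs during this connect.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4) Verify the negotiated auth parameters on the qpair, then tear down
#    before the next loop iteration.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0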
00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.687 11:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.687 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.619 00:22:19.619 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.619 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.619 11:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.877 { 00:22:19.877 "cntlid": 141, 00:22:19.877 "qid": 0, 00:22:19.877 "state": "enabled", 00:22:19.877 "thread": "nvmf_tgt_poll_group_000", 00:22:19.877 "listen_address": { 00:22:19.877 "trtype": "TCP", 00:22:19.877 "adrfam": "IPv4", 00:22:19.877 "traddr": "10.0.0.2", 00:22:19.877 "trsvcid": "4420" 00:22:19.877 }, 00:22:19.877 "peer_address": { 00:22:19.877 "trtype": "TCP", 00:22:19.877 "adrfam": "IPv4", 00:22:19.877 "traddr": "10.0.0.1", 00:22:19.877 "trsvcid": "55494" 00:22:19.877 }, 00:22:19.877 "auth": { 00:22:19.877 "state": "completed", 00:22:19.877 "digest": "sha512", 00:22:19.877 "dhgroup": "ffdhe8192" 00:22:19.877 } 00:22:19.877 } 00:22:19.877 ]' 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.877 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.136 11:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NTZjMWZmNTMwZGNmZWRlMzk5ZmI5NzQyYzc0NDViZDFjMjZmOWVhY2U3ZmM1OTQ3DL9hyA==: --dhchap-ctrl-secret DHHC-1:01:YjUyMWFkMDMyMDI3OTQ3YzJmZWUyOGI0ZDI0YWQ1MmFOO3To: 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.073 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.332 11:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.269 00:22:22.269 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.269 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.269 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.527 { 00:22:22.527 "cntlid": 143, 00:22:22.527 "qid": 0, 00:22:22.527 "state": "enabled", 00:22:22.527 "thread": "nvmf_tgt_poll_group_000", 00:22:22.527 "listen_address": { 00:22:22.527 "trtype": "TCP", 00:22:22.527 "adrfam": "IPv4", 00:22:22.527 "traddr": "10.0.0.2", 00:22:22.527 "trsvcid": "4420" 00:22:22.527 }, 00:22:22.527 "peer_address": { 00:22:22.527 "trtype": "TCP", 00:22:22.527 "adrfam": "IPv4", 00:22:22.527 "traddr": "10.0.0.1", 00:22:22.527 "trsvcid": "55526" 00:22:22.527 }, 00:22:22.527 "auth": { 00:22:22.527 "state": "completed", 00:22:22.527 "digest": "sha512", 00:22:22.527 "dhgroup": "ffdhe8192" 00:22:22.527 } 00:22:22.527 } 00:22:22.527 ]' 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.527 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.528 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.528 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.528 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.528 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.528 11:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.787 11:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:23.724 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:24.289 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.290 11:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.226 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.226 { 00:22:25.226 "cntlid": 145, 00:22:25.226 "qid": 0, 00:22:25.226 "state": "enabled", 00:22:25.226 "thread": "nvmf_tgt_poll_group_000", 00:22:25.226 "listen_address": { 00:22:25.226 "trtype": "TCP", 00:22:25.226 "adrfam": "IPv4", 00:22:25.226 "traddr": "10.0.0.2", 00:22:25.226 "trsvcid": "4420" 00:22:25.226 }, 00:22:25.226 "peer_address": { 00:22:25.226 "trtype": "TCP", 00:22:25.226 "adrfam": "IPv4", 00:22:25.226 "traddr": "10.0.0.1", 00:22:25.226 "trsvcid": "55562" 00:22:25.226 }, 00:22:25.226 "auth": { 00:22:25.226 "state": "completed", 00:22:25.226 "digest": "sha512", 00:22:25.226 "dhgroup": "ffdhe8192" 00:22:25.226 } 00:22:25.226 } 00:22:25.226 ]' 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.226 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.486 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.486 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.486 11:08:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.744 11:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWU3ODExNWU1OWQzYTdjZjZmMWE5MGIxY2M4Yzc3NTk2ZmU2ODBiNmViNWQ0ZWE4HmPpkA==: --dhchap-ctrl-secret DHHC-1:03:YmEwZDgzOGVlZDI1MDk0NzY4YzhmNWMwZjRhZTM0ODU0YmY2OTFiOTc1MGY3ZjNjZThkZThhOTY4M2MyYWU5Zd3n+8U=: 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.684 11:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:27.252 request: 00:22:27.252 { 00:22:27.252 "name": "nvme0", 00:22:27.252 "trtype": "tcp", 00:22:27.252 "traddr": "10.0.0.2", 00:22:27.252 "adrfam": "ipv4", 00:22:27.252 "trsvcid": "4420", 00:22:27.252 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.252 "prchk_reftag": false, 00:22:27.252 "prchk_guard": false, 00:22:27.252 "hdgst": false, 00:22:27.252 "ddgst": false, 00:22:27.252 "dhchap_key": "key2", 00:22:27.252 "method": "bdev_nvme_attach_controller", 00:22:27.252 "req_id": 1 00:22:27.252 } 00:22:27.252 Got JSON-RPC error response 00:22:27.252 response: 00:22:27.252 { 00:22:27.252 "code": -5, 00:22:27.252 "message": "Input/output error" 00:22:27.252 } 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:27.252 11:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:28.191 request: 00:22:28.191 { 00:22:28.191 "name": "nvme0", 00:22:28.191 "trtype": "tcp", 00:22:28.191 "traddr": "10.0.0.2", 00:22:28.191 "adrfam": "ipv4", 00:22:28.191 "trsvcid": "4420", 00:22:28.191 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.191 "prchk_reftag": false, 00:22:28.191 "prchk_guard": false, 00:22:28.191 "hdgst": false, 00:22:28.191 "ddgst": false, 00:22:28.191 "dhchap_key": "key1", 00:22:28.191 "dhchap_ctrlr_key": "ckey2", 00:22:28.191 "method": "bdev_nvme_attach_controller", 00:22:28.191 "req_id": 1 00:22:28.191 } 00:22:28.191 Got JSON-RPC error response 00:22:28.191 response: 00:22:28.191 { 00:22:28.191 "code": -5, 00:22:28.191 "message": "Input/output error" 00:22:28.191 } 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.191 11:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.759 request: 00:22:28.759 { 00:22:28.759 "name": "nvme0", 00:22:28.759 "trtype": "tcp", 00:22:28.759 "traddr": "10.0.0.2", 00:22:28.759 "adrfam": "ipv4", 00:22:28.759 "trsvcid": "4420", 00:22:28.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.759 "prchk_reftag": false, 00:22:28.759 "prchk_guard": false, 00:22:28.759 "hdgst": false, 00:22:28.759 "ddgst": false, 00:22:28.759 "dhchap_key": "key1", 00:22:28.759 "dhchap_ctrlr_key": "ckey1", 00:22:28.759 "method": "bdev_nvme_attach_controller", 00:22:28.759 "req_id": 1 00:22:28.759 } 00:22:28.759 Got JSON-RPC error response 00:22:28.759 response: 00:22:28.759 { 00:22:28.759 "code": -5, 00:22:28.759 "message": "Input/output error" 00:22:28.759 } 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.759 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 257293 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 257293 ']' 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 257293 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257293 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257293' 00:22:29.019 killing process with pid 257293 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 257293 00:22:29.019 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 257293 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=279340 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 279340 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 279340 ']' 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.278 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 279340 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 279340 ']' 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
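The waitforlisten step unfolding here simply polls the new daemon's RPC socket until it answers, failing fast if the process dies during startup. A minimal re-creation of that polling idea, using rpc_get_methods as the probe; the helper name and retry budget below are illustrative, not SPDK's exact implementation, which does additional bookkeeping:

# Illustrative stand-in for the waitforlisten pattern seen in this log.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # Give up early if the target process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods only succeeds once the RPC server is listening;
        # it works even when the app was started with --wait-for-rpc.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}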
00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.535 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.793 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.793 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:29.793 11:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:29.793 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.793 11:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.793 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:30.729 00:22:30.729 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.729 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.729 11:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.986 { 00:22:30.986 
"cntlid": 1, 00:22:30.986 "qid": 0, 00:22:30.986 "state": "enabled", 00:22:30.986 "thread": "nvmf_tgt_poll_group_000", 00:22:30.986 "listen_address": { 00:22:30.986 "trtype": "TCP", 00:22:30.986 "adrfam": "IPv4", 00:22:30.986 "traddr": "10.0.0.2", 00:22:30.986 "trsvcid": "4420" 00:22:30.986 }, 00:22:30.986 "peer_address": { 00:22:30.986 "trtype": "TCP", 00:22:30.986 "adrfam": "IPv4", 00:22:30.986 "traddr": "10.0.0.1", 00:22:30.986 "trsvcid": "52720" 00:22:30.986 }, 00:22:30.986 "auth": { 00:22:30.986 "state": "completed", 00:22:30.986 "digest": "sha512", 00:22:30.986 "dhgroup": "ffdhe8192" 00:22:30.986 } 00:22:30.986 } 00:22:30.986 ]' 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.986 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.553 11:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmQ5NjA3NWMyNzY3Njc4YTg1YzYyYmZlNzM0MTBjNGZmZmE1ODg1ODM4OTU0NTlkMzIzZjQxNTg3YzM3YTMzNcLpkZk=: 00:22:32.121 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:32.381 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.641 11:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.899 request: 00:22:32.899 { 00:22:32.899 "name": "nvme0", 00:22:32.899 "trtype": "tcp", 00:22:32.899 "traddr": "10.0.0.2", 00:22:32.899 "adrfam": "ipv4", 00:22:32.899 "trsvcid": "4420", 00:22:32.899 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.899 "prchk_reftag": false, 00:22:32.899 "prchk_guard": false, 00:22:32.899 "hdgst": false, 00:22:32.899 "ddgst": false, 00:22:32.899 "dhchap_key": "key3", 00:22:32.899 "method": "bdev_nvme_attach_controller", 00:22:32.899 "req_id": 1 00:22:32.899 } 00:22:32.899 Got JSON-RPC error response 00:22:32.899 response: 00:22:32.899 { 00:22:32.899 "code": -5, 00:22:32.899 "message": "Input/output error" 00:22:32.899 } 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:32.899 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.158 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.416 request: 00:22:33.416 { 00:22:33.416 "name": "nvme0", 00:22:33.416 "trtype": "tcp", 00:22:33.416 "traddr": "10.0.0.2", 00:22:33.416 "adrfam": "ipv4", 00:22:33.416 "trsvcid": "4420", 00:22:33.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.416 "prchk_reftag": false, 00:22:33.416 "prchk_guard": false, 00:22:33.416 "hdgst": false, 00:22:33.416 "ddgst": false, 00:22:33.416 "dhchap_key": "key3", 00:22:33.416 "method": "bdev_nvme_attach_controller", 00:22:33.416 "req_id": 1 00:22:33.416 } 00:22:33.416 Got JSON-RPC error response 00:22:33.416 response: 00:22:33.416 { 00:22:33.416 "code": -5, 00:22:33.416 "message": "Input/output error" 00:22:33.416 } 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:33.416 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.675 11:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.935 request: 00:22:33.935 { 00:22:33.935 "name": "nvme0", 00:22:33.935 "trtype": "tcp", 00:22:33.935 "traddr": "10.0.0.2", 00:22:33.935 "adrfam": "ipv4", 00:22:33.935 "trsvcid": "4420", 00:22:33.935 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.935 "prchk_reftag": false, 00:22:33.935 "prchk_guard": false, 00:22:33.935 "hdgst": false, 00:22:33.935 "ddgst": false, 00:22:33.935 
"dhchap_key": "key0", 00:22:33.935 "dhchap_ctrlr_key": "key1", 00:22:33.935 "method": "bdev_nvme_attach_controller", 00:22:33.935 "req_id": 1 00:22:33.935 } 00:22:33.935 Got JSON-RPC error response 00:22:33.935 response: 00:22:33.935 { 00:22:33.935 "code": -5, 00:22:33.935 "message": "Input/output error" 00:22:33.935 } 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:33.935 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:34.195 00:22:34.195 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:34.195 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:34.195 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.454 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.454 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.454 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 257427 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 257427 ']' 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 257427 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257427 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257427' 00:22:34.712 killing process with pid 257427 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 257427 00:22:34.712 11:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 257427 00:22:34.970 
11:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.970 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.970 rmmod nvme_tcp 00:22:34.970 rmmod nvme_fabrics 00:22:34.970 rmmod nvme_keyring 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 279340 ']' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 279340 ']' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 279340' 00:22:35.228 killing process with pid 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 279340 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.228 11:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.766 11:08:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.766 11:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.L73 /tmp/spdk.key-sha256.zq6 /tmp/spdk.key-sha384.7sx /tmp/spdk.key-sha512.PQs /tmp/spdk.key-sha512.B4L /tmp/spdk.key-sha384.9NC /tmp/spdk.key-sha256.t9n '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:37.766 00:22:37.766 real 3m3.821s 00:22:37.766 user 7m7.766s 00:22:37.766 sys 0m25.367s 00:22:37.766 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.766 11:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.766 ************************************ 00:22:37.766 END TEST nvmf_auth_target 00:22:37.766 ************************************ 00:22:37.766 11:08:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.766 11:08:51 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:37.766 11:08:51 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.766 11:08:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:37.766 11:08:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.766 11:08:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.766 ************************************ 00:22:37.766 START TEST nvmf_bdevio_no_huge 00:22:37.766 ************************************ 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.766 * Looking for test storage... 00:22:37.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
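The variables test/nvmf/common.sh just defined (NVMF_PORT, NVME_HOSTNQN, NVME_HOSTID, NVME_SUBNQN, NVME_CONNECT) are the knobs the initiator-side helpers reuse. A sketch of how they would typically be consumed by nvme-cli; the invocation itself is an assumption, not taken from this trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid portion, as in the trace above
    nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"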
00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.766 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.767 11:08:51 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.767 11:08:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.671 
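The loop that follows maps each matching PCI function to its kernel net device through the sysfs glob "/sys/bus/pci/devices/$pci/net/"*. A self-contained sketch of the same discovery pattern; the only assumptions are the standard sysfs layout and the Intel E810 vendor/device pair (0x8086/0x159b) seen above:

    for pci in /sys/bus/pci/devices/*; do
        read -r vendor < "$pci/vendor"
        read -r device < "$pci/device"
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # ice / E810
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue          # function may carry no netdev
            echo "Found net device ${net##*/} under ${pci##*/}"
        done
    done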
11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.671 11:08:53 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.671 11:08:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.671 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.671 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.671 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.671 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.671 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:39.672 00:22:39.672 --- 10.0.0.2 ping statistics --- 00:22:39.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.672 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:22:39.672 00:22:39.672 --- 10.0.0.1 ping statistics --- 00:22:39.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.672 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.672 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=282093 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 282093 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 282093 ']' 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.931 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.931 [2024-07-11 11:08:54.144493] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:22:39.931 [2024-07-11 11:08:54.144590] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:39.931 [2024-07-11 11:08:54.213649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.931 [2024-07-11 11:08:54.295705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.931 [2024-07-11 11:08:54.295770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.931 [2024-07-11 11:08:54.295800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.931 [2024-07-11 11:08:54.295811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.931 [2024-07-11 11:08:54.295821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
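The -m 0x78 argument passed to nvmf_tgt above is a hex core mask: 0x78 is binary 0111 1000, selecting cores 3 through 6, which is why the four reactor notices that follow report exactly those cores. A one-liner sketch decoding it (pure shell arithmetic, no assumptions):

    mask=0x78
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor core $core"
    done
    # prints cores 3, 4, 5 and 6, matching the reactor notices below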
00:22:39.931 [2024-07-11 11:08:54.295909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.931 [2024-07-11 11:08:54.295970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:39.931 [2024-07-11 11:08:54.296038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:39.931 [2024-07-11 11:08:54.296040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 [2024-07-11 11:08:54.415848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 Malloc0 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.191 [2024-07-11 11:08:54.453938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.191 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.191 { 00:22:40.191 "params": { 00:22:40.191 "name": "Nvme$subsystem", 00:22:40.191 "trtype": "$TEST_TRANSPORT", 00:22:40.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.191 "adrfam": "ipv4", 00:22:40.191 "trsvcid": "$NVMF_PORT", 00:22:40.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.191 "hdgst": ${hdgst:-false}, 00:22:40.191 "ddgst": ${ddgst:-false} 00:22:40.191 }, 00:22:40.191 "method": "bdev_nvme_attach_controller" 00:22:40.192 } 00:22:40.192 EOF 00:22:40.192 )") 00:22:40.192 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:40.192 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:40.192 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:40.192 11:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:40.192 "params": { 00:22:40.192 "name": "Nvme1", 00:22:40.192 "trtype": "tcp", 00:22:40.192 "traddr": "10.0.0.2", 00:22:40.192 "adrfam": "ipv4", 00:22:40.192 "trsvcid": "4420", 00:22:40.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.192 "hdgst": false, 00:22:40.192 "ddgst": false 00:22:40.192 }, 00:22:40.192 "method": "bdev_nvme_attach_controller" 00:22:40.192 }' 00:22:40.192 [2024-07-11 11:08:54.501682] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:22:40.192 [2024-07-11 11:08:54.501811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid282117 ] 00:22:40.192 [2024-07-11 11:08:54.565733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.450 [2024-07-11 11:08:54.652229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.450 [2024-07-11 11:08:54.652275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.450 [2024-07-11 11:08:54.652279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.450 I/O targets: 00:22:40.450 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:40.450 00:22:40.450 00:22:40.450 CUnit - A unit testing framework for C - Version 2.1-3 00:22:40.450 http://cunit.sourceforge.net/ 00:22:40.450 00:22:40.450 00:22:40.450 Suite: bdevio tests on: Nvme1n1 00:22:40.450 Test: blockdev write read block ...passed 00:22:40.707 Test: blockdev write zeroes read block ...passed 00:22:40.707 Test: blockdev write zeroes read no split ...passed 00:22:40.707 Test: blockdev write zeroes read split ...passed 00:22:40.707 Test: blockdev write zeroes read split partial ...passed 00:22:40.707 Test: blockdev reset ...[2024-07-11 11:08:54.967616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.707 [2024-07-11 11:08:54.967724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a84e0 (9): Bad file descriptor 00:22:40.707 [2024-07-11 11:08:55.028045] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:40.707 passed 00:22:40.707 Test: blockdev write read 8 blocks ...passed 00:22:40.707 Test: blockdev write read size > 128k ...passed 00:22:40.707 Test: blockdev write read invalid size ...passed 00:22:40.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:40.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:40.967 Test: blockdev write read max offset ...passed 00:22:40.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:40.967 Test: blockdev writev readv 8 blocks ...passed 00:22:40.967 Test: blockdev writev readv 30 x 1block ...passed 00:22:40.967 Test: blockdev writev readv block ...passed 00:22:40.967 Test: blockdev writev readv size > 128k ...passed 00:22:40.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:40.967 Test: blockdev comparev and writev ...[2024-07-11 11:08:55.283916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.283954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.283980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.283997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.284345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.284371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.284393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.284409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.284761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.284787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.284809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.284825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.285179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.285204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.285225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:40.967 [2024-07-11 11:08:55.285241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:40.967 passed 00:22:40.967 Test: blockdev nvme passthru rw ...passed 00:22:40.967 Test: blockdev nvme passthru vendor specific ...[2024-07-11 11:08:55.368033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.967 [2024-07-11 11:08:55.368061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.368213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.967 [2024-07-11 11:08:55.368237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.368382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.967 [2024-07-11 11:08:55.368405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:40.967 [2024-07-11 11:08:55.368557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:40.967 [2024-07-11 11:08:55.368581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:40.967 passed 00:22:40.967 Test: blockdev nvme admin passthru ...passed 00:22:41.227 Test: blockdev copy ...passed 00:22:41.227 00:22:41.227 Run Summary: Type Total Ran Passed Failed Inactive 00:22:41.227 suites 1 1 n/a 0 0 00:22:41.227 tests 23 23 23 0 0 00:22:41.227 asserts 152 152 152 0 n/a 00:22:41.227 00:22:41.227 Elapsed time = 1.222 seconds 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.487 rmmod nvme_tcp 00:22:41.487 rmmod nvme_fabrics 00:22:41.487 rmmod nvme_keyring 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 282093 ']' 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 282093 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 282093 ']' 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 282093 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 282093 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 282093' 00:22:41.487 killing process with pid 282093 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 282093 00:22:41.487 11:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 282093 00:22:41.745 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.745 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.745 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.003 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.003 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.003 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.003 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.003 11:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.909 11:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.909 00:22:43.909 real 0m6.484s 00:22:43.909 user 0m10.081s 00:22:43.909 sys 0m2.557s 00:22:43.909 11:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.909 11:08:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.909 ************************************ 00:22:43.909 END TEST nvmf_bdevio_no_huge 00:22:43.909 ************************************ 00:22:43.909 11:08:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:43.909 11:08:58 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:43.909 11:08:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.909 11:08:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.909 11:08:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.909 ************************************ 00:22:43.909 START TEST nvmf_tls 00:22:43.909 ************************************ 00:22:43.909 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:43.909 * Looking for test storage... 
00:22:43.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.909 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.909 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.910 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.169 11:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.071 
11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.071 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.071 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.071 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:22:46.072 00:22:46.072 --- 10.0.0.2 ping statistics --- 00:22:46.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.072 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:46.072 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:46.331 00:22:46.331 --- 10.0.0.1 ping statistics --- 00:22:46.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.331 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=284228 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 284228 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 284228 ']' 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.331 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.331 [2024-07-11 11:09:00.565882] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:22:46.331 [2024-07-11 11:09:00.565964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.331 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.331 [2024-07-11 11:09:00.633134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.331 [2024-07-11 11:09:00.722618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.331 [2024-07-11 11:09:00.722665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:46.331 [2024-07-11 11:09:00.722692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.331 [2024-07-11 11:09:00.722702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.331 [2024-07-11 11:09:00.722711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.331 [2024-07-11 11:09:00.722760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:46.589 11:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:46.847 true 00:22:46.847 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.847 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:47.105 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:47.105 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:47.105 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:47.363 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.363 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:47.621 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:47.621 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:47.621 11:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.879 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:48.139 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:48.139 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:48.139 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:48.398 11:09:02 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.398 11:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:48.658 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:48.658 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:48.658 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:48.917 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.917 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:49.175 11:09:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.MFUUnPKv7U 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.o1aLeRwNAy 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MFUUnPKv7U 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.o1aLeRwNAy 00:22:49.432 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:49.689 11:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:49.947 11:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MFUUnPKv7U 00:22:49.947 11:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MFUUnPKv7U 00:22:49.947 11:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.204 [2024-07-11 11:09:04.528309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.204 11:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.463 11:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.721 [2024-07-11 11:09:05.109878] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.721 [2024-07-11 11:09:05.110123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.721 11:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.979 malloc0 00:22:50.979 11:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:51.238 11:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MFUUnPKv7U 00:22:51.497 [2024-07-11 11:09:05.858186] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:51.497 11:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MFUUnPKv7U 00:22:51.497 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.712 Initializing NVMe Controllers 00:23:03.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.712 Initialization complete. Launching workers. 
00:23:03.712 ======================================================== 00:23:03.712 Latency(us) 00:23:03.712 Device Information : IOPS MiB/s Average min max 00:23:03.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8652.64 33.80 7398.67 1096.33 8554.17 00:23:03.712 ======================================================== 00:23:03.712 Total : 8652.64 33.80 7398.67 1096.33 8554.17 00:23:03.712 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFUUnPKv7U 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MFUUnPKv7U' 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=286695 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.712 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 286695 /var/tmp/bdevperf.sock 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 286695 ']' 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.713 11:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.713 [2024-07-11 11:09:16.014072] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:03.713 [2024-07-11 11:09:16.014148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286695 ] 00:23:03.713 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.713 [2024-07-11 11:09:16.071262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.713 [2024-07-11 11:09:16.158727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.713 11:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.713 11:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:03.713 11:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MFUUnPKv7U 00:23:03.713 [2024-07-11 11:09:16.484844] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.713 [2024-07-11 11:09:16.484964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:03.713 TLSTESTn1 00:23:03.713 11:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:03.713 Running I/O for 10 seconds... 00:23:13.697 00:23:13.697 Latency(us) 00:23:13.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.697 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.697 Verification LBA range: start 0x0 length 0x2000 00:23:13.697 TLSTESTn1 : 10.03 3347.32 13.08 0.00 0.00 38164.01 9466.31 79614.10 00:23:13.697 =================================================================================================================== 00:23:13.697 Total : 3347.32 13.08 0.00 0.00 38164.01 9466.31 79614.10 00:23:13.697 0 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 286695 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 286695 ']' 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 286695 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 286695 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 286695' 00:23:13.697 killing process with pid 286695 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 286695 00:23:13.697 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.697 00:23:13.697 Latency(us) 00:23:13.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:13.697 =================================================================================================================== 00:23:13.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.697 [2024-07-11 11:09:26.788497] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.697 11:09:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 286695 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o1aLeRwNAy 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o1aLeRwNAy 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o1aLeRwNAy 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o1aLeRwNAy' 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=287949 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 287949 /var/tmp/bdevperf.sock 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 287949 ']' 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.697 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 [2024-07-11 11:09:27.048726] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:13.698 [2024-07-11 11:09:27.048847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287949 ] 00:23:13.698 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.698 [2024-07-11 11:09:27.109842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.698 [2024-07-11 11:09:27.195144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o1aLeRwNAy 00:23:13.698 [2024-07-11 11:09:27.510689] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.698 [2024-07-11 11:09:27.510853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.698 [2024-07-11 11:09:27.520605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:13.698 [2024-07-11 11:09:27.520690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e8ab0 (107): Transport endpoint is not connected 00:23:13.698 [2024-07-11 11:09:27.521681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e8ab0 (9): Bad file descriptor 00:23:13.698 [2024-07-11 11:09:27.522681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.698 [2024-07-11 11:09:27.522699] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:13.698 [2024-07-11 11:09:27.522732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
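The trace above is the first negative case (target/tls.sh@146): bdevperf presented /tmp/tmp.o1aLeRwNAy (key_2) while the target only knows the key_1 PSK registered for host1 on cnode1 (target/tls.sh@58). The TLS handshake therefore never completes, the socket read fails with errno 107 ("Transport endpoint is not connected"), the controller lands in the failed state, and the JSON-RPC response that follows surfaces this as -5, Input/output error. For reference, a minimal sketch of the interchange encoding that produced both keys earlier (target/tls.sh@118-119); the little-endian placement of the appended CRC-32 is an assumption inferred from the keys printed in this log, not quoted from nvmf/common.sh:

# sketch of the inline Python behind format_interchange_psk (assumptions flagged above)
python3 - <<'PY'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"      # the hex string itself, as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed byte order for the appended CRC-32
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
PY
# expected, per the log above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
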
00:23:13.698 request: 00:23:13.698 { 00:23:13.698 "name": "TLSTEST", 00:23:13.698 "trtype": "tcp", 00:23:13.698 "traddr": "10.0.0.2", 00:23:13.698 "adrfam": "ipv4", 00:23:13.698 "trsvcid": "4420", 00:23:13.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.698 "prchk_reftag": false, 00:23:13.698 "prchk_guard": false, 00:23:13.698 "hdgst": false, 00:23:13.698 "ddgst": false, 00:23:13.698 "psk": "/tmp/tmp.o1aLeRwNAy", 00:23:13.698 "method": "bdev_nvme_attach_controller", 00:23:13.698 "req_id": 1 00:23:13.698 } 00:23:13.698 Got JSON-RPC error response 00:23:13.698 response: 00:23:13.698 { 00:23:13.698 "code": -5, 00:23:13.698 "message": "Input/output error" 00:23:13.698 } 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 287949 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 287949 ']' 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 287949 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 287949 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 287949' 00:23:13.698 killing process with pid 287949 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 287949 00:23:13.698 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.698 00:23:13.698 Latency(us) 00:23:13.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.698 =================================================================================================================== 00:23:13.698 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.698 [2024-07-11 11:09:27.567134] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 287949 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFUUnPKv7U 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFUUnPKv7U 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFUUnPKv7U 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MFUUnPKv7U' 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=288026 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 288026 /var/tmp/bdevperf.sock 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 288026 ']' 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.698 11:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 [2024-07-11 11:09:27.818246] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:13.698 [2024-07-11 11:09:27.818327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288026 ] 00:23:13.698 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.698 [2024-07-11 11:09:27.875312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.698 [2024-07-11 11:09:27.957896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.698 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.698 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.698 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MFUUnPKv7U 00:23:13.958 [2024-07-11 11:09:28.300287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.958 [2024-07-11 11:09:28.300416] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.958 [2024-07-11 11:09:28.305431] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:13.958 [2024-07-11 11:09:28.305469] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:13.958 [2024-07-11 11:09:28.305523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:13.958 [2024-07-11 11:09:28.306129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf50ab0 (107): Transport endpoint is not connected 00:23:13.958 [2024-07-11 11:09:28.307119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf50ab0 (9): Bad file descriptor 00:23:13.958 [2024-07-11 11:09:28.308118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:13.958 [2024-07-11 11:09:28.308137] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:13.958 [2024-07-11 11:09:28.308170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
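Second negative case (target/tls.sh@149): the key is right this time, but the host identity is wrong. The target builds the TLS PSK identity "NVMe0R01 <hostnqn> <subnqn>" (visible in the tcp.c and posix.c errors above) and looks up the PSK registered for that pair; host2 was never added to cnode1, so the server-side lookup fails and the connection is torn down before NVMe/TCP starts, yielding the same errno 107 / -5 pattern below. A hypothetical fix-up, not executed in this run, would be to register host2 before attaching:

# hypothetical (not part of this log): grant host2 access to cnode1 with the same PSK
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MFUUnPKv7U
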
00:23:13.958 request: 00:23:13.958 { 00:23:13.958 "name": "TLSTEST", 00:23:13.958 "trtype": "tcp", 00:23:13.958 "traddr": "10.0.0.2", 00:23:13.958 "adrfam": "ipv4", 00:23:13.958 "trsvcid": "4420", 00:23:13.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.958 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.958 "prchk_reftag": false, 00:23:13.958 "prchk_guard": false, 00:23:13.958 "hdgst": false, 00:23:13.958 "ddgst": false, 00:23:13.958 "psk": "/tmp/tmp.MFUUnPKv7U", 00:23:13.958 "method": "bdev_nvme_attach_controller", 00:23:13.958 "req_id": 1 00:23:13.958 } 00:23:13.958 Got JSON-RPC error response 00:23:13.958 response: 00:23:13.958 { 00:23:13.958 "code": -5, 00:23:13.958 "message": "Input/output error" 00:23:13.958 } 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 288026 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 288026 ']' 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 288026 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288026 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288026' 00:23:13.958 killing process with pid 288026 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 288026 00:23:13.958 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.958 00:23:13.958 Latency(us) 00:23:13.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.958 =================================================================================================================== 00:23:13.958 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.958 [2024-07-11 11:09:28.355990] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.958 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 288026 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFUUnPKv7U 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFUUnPKv7U 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFUUnPKv7U 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MFUUnPKv7U' 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=288158 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 288158 /var/tmp/bdevperf.sock 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 288158 ']' 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.217 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.217 [2024-07-11 11:09:28.621324] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:14.217 [2024-07-11 11:09:28.621404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288158 ] 00:23:14.476 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.476 [2024-07-11 11:09:28.680374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.476 [2024-07-11 11:09:28.766356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.476 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.476 11:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.476 11:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MFUUnPKv7U 00:23:14.735 [2024-07-11 11:09:29.094600] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.735 [2024-07-11 11:09:29.094816] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.735 [2024-07-11 11:09:29.104666] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:14.735 [2024-07-11 11:09:29.104696] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:14.735 [2024-07-11 11:09:29.104750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.735 [2024-07-11 11:09:29.105537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19edab0 (107): Transport endpoint is not connected 00:23:14.735 [2024-07-11 11:09:29.106528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19edab0 (9): Bad file descriptor 00:23:14.735 [2024-07-11 11:09:29.107528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:14.735 [2024-07-11 11:09:29.107552] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.735 [2024-07-11 11:09:29.107585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
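Third negative case (target/tls.sh@152): valid host and valid key, but the wrong subsystem. nqn.2016-06.io.spdk:cnode2 was never created (only cnode1 exists, from target/tls.sh@52), so the PSK lookup for identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" fails in the same way. A hypothetical sanity check, not executed in this run, would be to list what the target actually exposes:

# hypothetical (not part of this log): dump the configured subsystems;
# only the discovery subsystem and nqn.2016-06.io.spdk:cnode1 should appear
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
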
00:23:14.735 request: 00:23:14.735 { 00:23:14.735 "name": "TLSTEST", 00:23:14.735 "trtype": "tcp", 00:23:14.735 "traddr": "10.0.0.2", 00:23:14.735 "adrfam": "ipv4", 00:23:14.735 "trsvcid": "4420", 00:23:14.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.735 "prchk_reftag": false, 00:23:14.735 "prchk_guard": false, 00:23:14.735 "hdgst": false, 00:23:14.735 "ddgst": false, 00:23:14.735 "psk": "/tmp/tmp.MFUUnPKv7U", 00:23:14.735 "method": "bdev_nvme_attach_controller", 00:23:14.735 "req_id": 1 00:23:14.735 } 00:23:14.735 Got JSON-RPC error response 00:23:14.735 response: 00:23:14.735 { 00:23:14.735 "code": -5, 00:23:14.735 "message": "Input/output error" 00:23:14.735 } 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 288158 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 288158 ']' 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 288158 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288158 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288158' 00:23:14.735 killing process with pid 288158 00:23:14.735 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 288158 00:23:14.994 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.994 00:23:14.994 Latency(us) 00:23:14.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.994 =================================================================================================================== 00:23:14.994 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.994 [2024-07-11 11:09:29.159766] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 288158 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=288297 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 288297 /var/tmp/bdevperf.sock 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 288297 ']' 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.994 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.994 [2024-07-11 11:09:29.412977] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:14.994 [2024-07-11 11:09:29.413056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288297 ] 00:23:15.253 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.253 [2024-07-11 11:09:29.470723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.253 [2024-07-11 11:09:29.552861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.253 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.253 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:15.253 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:15.823 [2024-07-11 11:09:29.936925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:15.823 [2024-07-11 11:09:29.938969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x580e60 (9): Bad file descriptor 00:23:15.823 [2024-07-11 11:09:29.939965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:15.823 [2024-07-11 11:09:29.939989] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:15.823 [2024-07-11 11:09:29.940008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
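The failure sequence here, ending in the -5 "Input/output error" response just below, is the expected outcome: target/tls.sh line 155 runs run_bdevperf under the NOT wrapper with an empty PSK, so the plain-TCP attach against the TLS-only listener has to fail for the test case to pass. A minimal sketch of that inversion, assuming simplified semantics for autotest_common.sh's NOT helper; the real helper also special-cases signal exits, which is what the '(( es > 128 ))' check visible in the trace is for:

NOT() {
    # Invert the wrapped command's exit status: succeed only when it fails.
    if "$@"; then
        return 1
    fi
    return 0
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''   # empty PSK: the attach must fail
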
00:23:15.823 request: 00:23:15.823 { 00:23:15.823 "name": "TLSTEST", 00:23:15.823 "trtype": "tcp", 00:23:15.823 "traddr": "10.0.0.2", 00:23:15.823 "adrfam": "ipv4", 00:23:15.823 "trsvcid": "4420", 00:23:15.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.823 "prchk_reftag": false, 00:23:15.823 "prchk_guard": false, 00:23:15.823 "hdgst": false, 00:23:15.823 "ddgst": false, 00:23:15.823 "method": "bdev_nvme_attach_controller", 00:23:15.823 "req_id": 1 00:23:15.823 } 00:23:15.823 Got JSON-RPC error response 00:23:15.823 response: 00:23:15.824 { 00:23:15.824 "code": -5, 00:23:15.824 "message": "Input/output error" 00:23:15.824 } 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 288297 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 288297 ']' 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 288297 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288297 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288297' 00:23:15.824 killing process with pid 288297 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 288297 00:23:15.824 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.824 00:23:15.824 Latency(us) 00:23:15.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.824 =================================================================================================================== 00:23:15.824 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.824 11:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 288297 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 284228 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 284228 ']' 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 284228 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 284228 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 284228' 00:23:15.824 killing 
process with pid 284228 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 284228 00:23:15.824 [2024-07-11 11:09:30.236512] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.824 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 284228 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:16.116 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.05UvFswSm3 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.05UvFswSm3 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=288449 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 288449 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 288449 ']' 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.426 [2024-07-11 11:09:30.573073] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
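The key_long value constructed above is the NVMe TLS PSK interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash indicator (01 for SHA-256, 02 for SHA-384; the trace passes digest 2 for a 48-byte key), and the base64 encoding of the configured key bytes with a CRC-32 appended. A runnable sketch of the format_interchange_psk helper traced through nvmf/common.sh above; treating the hex string as literal ASCII bytes and appending the CRC little-endian are assumptions consistent with the wWXNJw== tail seen in key_long, not a verbatim copy of SPDK's helper:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string is used as literal ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 over the key, assumed appended little-endian
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}

# format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: string captured in key_long above
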
00:23:16.426 [2024-07-11 11:09:30.573147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.426 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.426 [2024-07-11 11:09:30.634684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.426 [2024-07-11 11:09:30.717176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.426 [2024-07-11 11:09:30.717233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.426 [2024-07-11 11:09:30.717260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.426 [2024-07-11 11:09:30.717271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.426 [2024-07-11 11:09:30.717280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.426 [2024-07-11 11:09:30.717307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:16.426 11:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.715 11:09:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.715 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:16.715 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.05UvFswSm3 00:23:16.715 11:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.715 [2024-07-11 11:09:31.067358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.715 11:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.980 11:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:17.239 [2024-07-11 11:09:31.552661] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.239 [2024-07-11 11:09:31.552916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.239 11:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:17.499 malloc0 00:23:17.499 11:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.759 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.05UvFswSm3 00:23:18.017 [2024-07-11 11:09:32.293883] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.05UvFswSm3 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.05UvFswSm3' 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=288624 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 288624 /var/tmp/bdevperf.sock 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 288624 ']' 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.018 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.018 [2024-07-11 11:09:32.354818] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:18.018 [2024-07-11 11:09:32.354896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288624 ] 00:23:18.018 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.018 [2024-07-11 11:09:32.413287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.276 [2024-07-11 11:09:32.499513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.276 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.276 11:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:18.276 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:18.534 [2024-07-11 11:09:32.881758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.534 [2024-07-11 11:09:32.881894] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:18.534 TLSTESTn1 00:23:18.793 11:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:18.793 Running I/O for 10 seconds... 00:23:28.788 00:23:28.788 Latency(us) 00:23:28.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.788 Verification LBA range: start 0x0 length 0x2000 00:23:28.788 TLSTESTn1 : 10.04 2991.11 11.68 0.00 0.00 42688.88 6140.97 68351.62 00:23:28.788 =================================================================================================================== 00:23:28.788 Total : 2991.11 11.68 0.00 0.00 42688.88 6140.97 68351.62 00:23:28.788 0 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 288624 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 288624 ']' 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 288624 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288624 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:28.788 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288624' 00:23:28.788 killing process with pid 288624 00:23:28.789 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 288624 00:23:28.789 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.789 00:23:28.789 Latency(us) 00:23:28.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:28.789 =================================================================================================================== 00:23:28.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.789 [2024-07-11 11:09:43.203860] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.789 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 288624 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.05UvFswSm3 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.05UvFswSm3 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.05UvFswSm3 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.05UvFswSm3 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.05UvFswSm3' 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=289935 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 289935 /var/tmp/bdevperf.sock 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 289935 ']' 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.047 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.307 [2024-07-11 11:09:43.482767] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:29.307 [2024-07-11 11:09:43.482845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289935 ] 00:23:29.307 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.307 [2024-07-11 11:09:43.542904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.307 [2024-07-11 11:09:43.627981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.567 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.567 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:29.567 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:29.567 [2024-07-11 11:09:43.974854] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.567 [2024-07-11 11:09:43.974949] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:29.567 [2024-07-11 11:09:43.974965] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.05UvFswSm3 00:23:29.567 request: 00:23:29.567 { 00:23:29.567 "name": "TLSTEST", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.567 "prchk_reftag": false, 00:23:29.567 "prchk_guard": false, 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false, 00:23:29.567 "psk": "/tmp/tmp.05UvFswSm3", 00:23:29.567 "method": "bdev_nvme_attach_controller", 00:23:29.567 "req_id": 1 00:23:29.567 } 00:23:29.567 Got JSON-RPC error response 00:23:29.567 response: 00:23:29.567 { 00:23:29.567 "code": -1, 00:23:29.567 "message": "Operation not permitted" 00:23:29.567 } 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 289935 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 289935 ']' 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 289935 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.825 11:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 289935 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 289935' 00:23:29.825 killing process with pid 289935 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 289935 00:23:29.825 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.825 00:23:29.825 Latency(us) 00:23:29.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.825 =================================================================================================================== 
00:23:29.825 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 289935 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 288449 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 288449 ']' 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 288449 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.825 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288449 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288449' 00:23:30.084 killing process with pid 288449 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 288449 00:23:30.084 [2024-07-11 11:09:44.268005] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 288449 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=290076 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 290076 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 290076 ']' 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.084 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.342 [2024-07-11 11:09:44.552581] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:23:30.342 [2024-07-11 11:09:44.552666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.342 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.342 [2024-07-11 11:09:44.613329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.342 [2024-07-11 11:09:44.692749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.342 [2024-07-11 11:09:44.692828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.342 [2024-07-11 11:09:44.692842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.342 [2024-07-11 11:09:44.692853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.342 [2024-07-11 11:09:44.692877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.342 [2024-07-11 11:09:44.692909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.05UvFswSm3 00:23:30.601 11:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:30.861 [2024-07-11 11:09:45.071456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.861 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.120 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.378 [2024-07-11 11:09:45.612936] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:23:31.378 [2024-07-11 11:09:45.613186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.378 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.635 malloc0 00:23:31.635 11:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.893 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:32.151 [2024-07-11 11:09:46.426326] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:32.151 [2024-07-11 11:09:46.426364] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:32.152 [2024-07-11 11:09:46.426408] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:32.152 request: 00:23:32.152 { 00:23:32.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.152 "host": "nqn.2016-06.io.spdk:host1", 00:23:32.152 "psk": "/tmp/tmp.05UvFswSm3", 00:23:32.152 "method": "nvmf_subsystem_add_host", 00:23:32.152 "req_id": 1 00:23:32.152 } 00:23:32.152 Got JSON-RPC error response 00:23:32.152 response: 00:23:32.152 { 00:23:32.152 "code": -32603, 00:23:32.152 "message": "Internal error" 00:23:32.152 } 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 290076 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 290076 ']' 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 290076 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290076 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290076' 00:23:32.152 killing process with pid 290076 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 290076 00:23:32.152 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 290076 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.05UvFswSm3 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=290368 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 290368 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 290368 ']' 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.410 11:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.410 [2024-07-11 11:09:46.760406] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:32.410 [2024-07-11 11:09:46.760493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.410 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.410 [2024-07-11 11:09:46.821699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.668 [2024-07-11 11:09:46.900277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.668 [2024-07-11 11:09:46.900334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.668 [2024-07-11 11:09:46.900362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.668 [2024-07-11 11:09:46.900373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.668 [2024-07-11 11:09:46.900383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
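Both rejections above, the -1 "Operation not permitted" from bdev_nvme_attach_controller and the -32603 "Internal error" from nvmf_subsystem_add_host, trace back to the chmod 0666 at target/tls.sh line 170: the initiator's bdev_nvme_load_psk and the target's tcp_load_psk both refuse a PSK file that group or others can access, which is why line 181 flips the mode back to 0600 before this restart. An illustrative equivalent of that mode check, not SPDK's actual source:

psk=/tmp/tmp.05UvFswSm3          # key file from the trace above
mode=$(stat -c '%a' "$psk")      # e.g. 600 or 666
if (( 8#$mode & 8#077 )); then   # any group/other permission bit set?
    echo "Incorrect permissions for PSK file" >&2
    exit 1
fi
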
00:23:32.668 [2024-07-11 11:09:46.900414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.05UvFswSm3 00:23:32.668 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:32.926 [2024-07-11 11:09:47.252821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.926 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.183 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:33.441 [2024-07-11 11:09:47.750223] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.441 [2024-07-11 11:09:47.750470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.441 11:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.699 malloc0 00:23:33.699 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.957 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:34.217 [2024-07-11 11:09:48.478135] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=290535 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 290535 /var/tmp/bdevperf.sock 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 290535 ']' 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.217 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.217 [2024-07-11 11:09:48.540586] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:34.217 [2024-07-11 11:09:48.540651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290535 ] 00:23:34.217 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.217 [2024-07-11 11:09:48.598323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.475 [2024-07-11 11:09:48.689568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.475 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.475 11:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:34.475 11:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:34.733 [2024-07-11 11:09:49.031917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.733 [2024-07-11 11:09:49.032046] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:34.733 TLSTESTn1 00:23:34.733 11:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:35.302 11:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:35.303 "subsystems": [ 00:23:35.303 { 00:23:35.303 "subsystem": "keyring", 00:23:35.303 "config": [] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "iobuf", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "iobuf_set_options", 00:23:35.303 "params": { 00:23:35.303 "small_pool_count": 8192, 00:23:35.303 "large_pool_count": 1024, 00:23:35.303 "small_bufsize": 8192, 00:23:35.303 "large_bufsize": 135168 00:23:35.303 } 00:23:35.303 } 00:23:35.303 ] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "sock", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "sock_set_default_impl", 00:23:35.303 "params": { 00:23:35.303 "impl_name": "posix" 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "sock_impl_set_options", 00:23:35.303 "params": { 00:23:35.303 "impl_name": "ssl", 00:23:35.303 "recv_buf_size": 4096, 00:23:35.303 "send_buf_size": 4096, 00:23:35.303 "enable_recv_pipe": true, 00:23:35.303 "enable_quickack": false, 00:23:35.303 "enable_placement_id": 0, 00:23:35.303 "enable_zerocopy_send_server": true, 00:23:35.303 "enable_zerocopy_send_client": false, 00:23:35.303 "zerocopy_threshold": 0, 00:23:35.303 "tls_version": 0, 00:23:35.303 "enable_ktls": false 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "sock_impl_set_options", 00:23:35.303 "params": { 00:23:35.303 "impl_name": "posix", 00:23:35.303 "recv_buf_size": 2097152, 00:23:35.303 
"send_buf_size": 2097152, 00:23:35.303 "enable_recv_pipe": true, 00:23:35.303 "enable_quickack": false, 00:23:35.303 "enable_placement_id": 0, 00:23:35.303 "enable_zerocopy_send_server": true, 00:23:35.303 "enable_zerocopy_send_client": false, 00:23:35.303 "zerocopy_threshold": 0, 00:23:35.303 "tls_version": 0, 00:23:35.303 "enable_ktls": false 00:23:35.303 } 00:23:35.303 } 00:23:35.303 ] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "vmd", 00:23:35.303 "config": [] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "accel", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "accel_set_options", 00:23:35.303 "params": { 00:23:35.303 "small_cache_size": 128, 00:23:35.303 "large_cache_size": 16, 00:23:35.303 "task_count": 2048, 00:23:35.303 "sequence_count": 2048, 00:23:35.303 "buf_count": 2048 00:23:35.303 } 00:23:35.303 } 00:23:35.303 ] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "bdev", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "bdev_set_options", 00:23:35.303 "params": { 00:23:35.303 "bdev_io_pool_size": 65535, 00:23:35.303 "bdev_io_cache_size": 256, 00:23:35.303 "bdev_auto_examine": true, 00:23:35.303 "iobuf_small_cache_size": 128, 00:23:35.303 "iobuf_large_cache_size": 16 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_raid_set_options", 00:23:35.303 "params": { 00:23:35.303 "process_window_size_kb": 1024 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_iscsi_set_options", 00:23:35.303 "params": { 00:23:35.303 "timeout_sec": 30 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_nvme_set_options", 00:23:35.303 "params": { 00:23:35.303 "action_on_timeout": "none", 00:23:35.303 "timeout_us": 0, 00:23:35.303 "timeout_admin_us": 0, 00:23:35.303 "keep_alive_timeout_ms": 10000, 00:23:35.303 "arbitration_burst": 0, 00:23:35.303 "low_priority_weight": 0, 00:23:35.303 "medium_priority_weight": 0, 00:23:35.303 "high_priority_weight": 0, 00:23:35.303 "nvme_adminq_poll_period_us": 10000, 00:23:35.303 "nvme_ioq_poll_period_us": 0, 00:23:35.303 "io_queue_requests": 0, 00:23:35.303 "delay_cmd_submit": true, 00:23:35.303 "transport_retry_count": 4, 00:23:35.303 "bdev_retry_count": 3, 00:23:35.303 "transport_ack_timeout": 0, 00:23:35.303 "ctrlr_loss_timeout_sec": 0, 00:23:35.303 "reconnect_delay_sec": 0, 00:23:35.303 "fast_io_fail_timeout_sec": 0, 00:23:35.303 "disable_auto_failback": false, 00:23:35.303 "generate_uuids": false, 00:23:35.303 "transport_tos": 0, 00:23:35.303 "nvme_error_stat": false, 00:23:35.303 "rdma_srq_size": 0, 00:23:35.303 "io_path_stat": false, 00:23:35.303 "allow_accel_sequence": false, 00:23:35.303 "rdma_max_cq_size": 0, 00:23:35.303 "rdma_cm_event_timeout_ms": 0, 00:23:35.303 "dhchap_digests": [ 00:23:35.303 "sha256", 00:23:35.303 "sha384", 00:23:35.303 "sha512" 00:23:35.303 ], 00:23:35.303 "dhchap_dhgroups": [ 00:23:35.303 "null", 00:23:35.303 "ffdhe2048", 00:23:35.303 "ffdhe3072", 00:23:35.303 "ffdhe4096", 00:23:35.303 "ffdhe6144", 00:23:35.303 "ffdhe8192" 00:23:35.303 ] 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_nvme_set_hotplug", 00:23:35.303 "params": { 00:23:35.303 "period_us": 100000, 00:23:35.303 "enable": false 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_malloc_create", 00:23:35.303 "params": { 00:23:35.303 "name": "malloc0", 00:23:35.303 "num_blocks": 8192, 00:23:35.303 "block_size": 4096, 00:23:35.303 "physical_block_size": 4096, 00:23:35.303 "uuid": 
"4cfa41e4-2061-46eb-b86c-1dc254abdf63", 00:23:35.303 "optimal_io_boundary": 0 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "bdev_wait_for_examine" 00:23:35.303 } 00:23:35.303 ] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "nbd", 00:23:35.303 "config": [] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "scheduler", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "framework_set_scheduler", 00:23:35.303 "params": { 00:23:35.303 "name": "static" 00:23:35.303 } 00:23:35.303 } 00:23:35.303 ] 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "subsystem": "nvmf", 00:23:35.303 "config": [ 00:23:35.303 { 00:23:35.303 "method": "nvmf_set_config", 00:23:35.303 "params": { 00:23:35.303 "discovery_filter": "match_any", 00:23:35.303 "admin_cmd_passthru": { 00:23:35.303 "identify_ctrlr": false 00:23:35.303 } 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "nvmf_set_max_subsystems", 00:23:35.303 "params": { 00:23:35.303 "max_subsystems": 1024 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "nvmf_set_crdt", 00:23:35.303 "params": { 00:23:35.303 "crdt1": 0, 00:23:35.303 "crdt2": 0, 00:23:35.303 "crdt3": 0 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "nvmf_create_transport", 00:23:35.303 "params": { 00:23:35.303 "trtype": "TCP", 00:23:35.303 "max_queue_depth": 128, 00:23:35.303 "max_io_qpairs_per_ctrlr": 127, 00:23:35.303 "in_capsule_data_size": 4096, 00:23:35.303 "max_io_size": 131072, 00:23:35.303 "io_unit_size": 131072, 00:23:35.303 "max_aq_depth": 128, 00:23:35.303 "num_shared_buffers": 511, 00:23:35.303 "buf_cache_size": 4294967295, 00:23:35.303 "dif_insert_or_strip": false, 00:23:35.303 "zcopy": false, 00:23:35.303 "c2h_success": false, 00:23:35.303 "sock_priority": 0, 00:23:35.303 "abort_timeout_sec": 1, 00:23:35.303 "ack_timeout": 0, 00:23:35.303 "data_wr_pool_size": 0 00:23:35.303 } 00:23:35.303 }, 00:23:35.303 { 00:23:35.303 "method": "nvmf_create_subsystem", 00:23:35.303 "params": { 00:23:35.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.303 "allow_any_host": false, 00:23:35.303 "serial_number": "SPDK00000000000001", 00:23:35.303 "model_number": "SPDK bdev Controller", 00:23:35.303 "max_namespaces": 10, 00:23:35.303 "min_cntlid": 1, 00:23:35.303 "max_cntlid": 65519, 00:23:35.304 "ana_reporting": false 00:23:35.304 } 00:23:35.304 }, 00:23:35.304 { 00:23:35.304 "method": "nvmf_subsystem_add_host", 00:23:35.304 "params": { 00:23:35.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.304 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.304 "psk": "/tmp/tmp.05UvFswSm3" 00:23:35.304 } 00:23:35.304 }, 00:23:35.304 { 00:23:35.304 "method": "nvmf_subsystem_add_ns", 00:23:35.304 "params": { 00:23:35.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.304 "namespace": { 00:23:35.304 "nsid": 1, 00:23:35.304 "bdev_name": "malloc0", 00:23:35.304 "nguid": "4CFA41E4206146EBB86C1DC254ABDF63", 00:23:35.304 "uuid": "4cfa41e4-2061-46eb-b86c-1dc254abdf63", 00:23:35.304 "no_auto_visible": false 00:23:35.304 } 00:23:35.304 } 00:23:35.304 }, 00:23:35.304 { 00:23:35.304 "method": "nvmf_subsystem_add_listener", 00:23:35.304 "params": { 00:23:35.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.304 "listen_address": { 00:23:35.304 "trtype": "TCP", 00:23:35.304 "adrfam": "IPv4", 00:23:35.304 "traddr": "10.0.0.2", 00:23:35.304 "trsvcid": "4420" 00:23:35.304 }, 00:23:35.304 "secure_channel": true 00:23:35.304 } 00:23:35.304 } 00:23:35.304 ] 00:23:35.304 } 00:23:35.304 ] 00:23:35.304 }' 00:23:35.304 11:09:49 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:35.562 11:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:35.562 "subsystems": [ 00:23:35.562 { 00:23:35.562 "subsystem": "keyring", 00:23:35.562 "config": [] 00:23:35.562 }, 00:23:35.562 { 00:23:35.562 "subsystem": "iobuf", 00:23:35.562 "config": [ 00:23:35.562 { 00:23:35.562 "method": "iobuf_set_options", 00:23:35.562 "params": { 00:23:35.562 "small_pool_count": 8192, 00:23:35.562 "large_pool_count": 1024, 00:23:35.562 "small_bufsize": 8192, 00:23:35.562 "large_bufsize": 135168 00:23:35.562 } 00:23:35.562 } 00:23:35.562 ] 00:23:35.562 }, 00:23:35.562 { 00:23:35.562 "subsystem": "sock", 00:23:35.562 "config": [ 00:23:35.562 { 00:23:35.562 "method": "sock_set_default_impl", 00:23:35.562 "params": { 00:23:35.562 "impl_name": "posix" 00:23:35.562 } 00:23:35.562 }, 00:23:35.562 { 00:23:35.562 "method": "sock_impl_set_options", 00:23:35.562 "params": { 00:23:35.562 "impl_name": "ssl", 00:23:35.562 "recv_buf_size": 4096, 00:23:35.562 "send_buf_size": 4096, 00:23:35.562 "enable_recv_pipe": true, 00:23:35.562 "enable_quickack": false, 00:23:35.562 "enable_placement_id": 0, 00:23:35.563 "enable_zerocopy_send_server": true, 00:23:35.563 "enable_zerocopy_send_client": false, 00:23:35.563 "zerocopy_threshold": 0, 00:23:35.563 "tls_version": 0, 00:23:35.563 "enable_ktls": false 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "sock_impl_set_options", 00:23:35.563 "params": { 00:23:35.563 "impl_name": "posix", 00:23:35.563 "recv_buf_size": 2097152, 00:23:35.563 "send_buf_size": 2097152, 00:23:35.563 "enable_recv_pipe": true, 00:23:35.563 "enable_quickack": false, 00:23:35.563 "enable_placement_id": 0, 00:23:35.563 "enable_zerocopy_send_server": true, 00:23:35.563 "enable_zerocopy_send_client": false, 00:23:35.563 "zerocopy_threshold": 0, 00:23:35.563 "tls_version": 0, 00:23:35.563 "enable_ktls": false 00:23:35.563 } 00:23:35.563 } 00:23:35.563 ] 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "subsystem": "vmd", 00:23:35.563 "config": [] 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "subsystem": "accel", 00:23:35.563 "config": [ 00:23:35.563 { 00:23:35.563 "method": "accel_set_options", 00:23:35.563 "params": { 00:23:35.563 "small_cache_size": 128, 00:23:35.563 "large_cache_size": 16, 00:23:35.563 "task_count": 2048, 00:23:35.563 "sequence_count": 2048, 00:23:35.563 "buf_count": 2048 00:23:35.563 } 00:23:35.563 } 00:23:35.563 ] 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "subsystem": "bdev", 00:23:35.563 "config": [ 00:23:35.563 { 00:23:35.563 "method": "bdev_set_options", 00:23:35.563 "params": { 00:23:35.563 "bdev_io_pool_size": 65535, 00:23:35.563 "bdev_io_cache_size": 256, 00:23:35.563 "bdev_auto_examine": true, 00:23:35.563 "iobuf_small_cache_size": 128, 00:23:35.563 "iobuf_large_cache_size": 16 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_raid_set_options", 00:23:35.563 "params": { 00:23:35.563 "process_window_size_kb": 1024 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_iscsi_set_options", 00:23:35.563 "params": { 00:23:35.563 "timeout_sec": 30 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_nvme_set_options", 00:23:35.563 "params": { 00:23:35.563 "action_on_timeout": "none", 00:23:35.563 "timeout_us": 0, 00:23:35.563 "timeout_admin_us": 0, 00:23:35.563 "keep_alive_timeout_ms": 10000, 00:23:35.563 "arbitration_burst": 0, 
00:23:35.563 "low_priority_weight": 0, 00:23:35.563 "medium_priority_weight": 0, 00:23:35.563 "high_priority_weight": 0, 00:23:35.563 "nvme_adminq_poll_period_us": 10000, 00:23:35.563 "nvme_ioq_poll_period_us": 0, 00:23:35.563 "io_queue_requests": 512, 00:23:35.563 "delay_cmd_submit": true, 00:23:35.563 "transport_retry_count": 4, 00:23:35.563 "bdev_retry_count": 3, 00:23:35.563 "transport_ack_timeout": 0, 00:23:35.563 "ctrlr_loss_timeout_sec": 0, 00:23:35.563 "reconnect_delay_sec": 0, 00:23:35.563 "fast_io_fail_timeout_sec": 0, 00:23:35.563 "disable_auto_failback": false, 00:23:35.563 "generate_uuids": false, 00:23:35.563 "transport_tos": 0, 00:23:35.563 "nvme_error_stat": false, 00:23:35.563 "rdma_srq_size": 0, 00:23:35.563 "io_path_stat": false, 00:23:35.563 "allow_accel_sequence": false, 00:23:35.563 "rdma_max_cq_size": 0, 00:23:35.563 "rdma_cm_event_timeout_ms": 0, 00:23:35.563 "dhchap_digests": [ 00:23:35.563 "sha256", 00:23:35.563 "sha384", 00:23:35.563 "sha512" 00:23:35.563 ], 00:23:35.563 "dhchap_dhgroups": [ 00:23:35.563 "null", 00:23:35.563 "ffdhe2048", 00:23:35.563 "ffdhe3072", 00:23:35.563 "ffdhe4096", 00:23:35.563 "ffdhe6144", 00:23:35.563 "ffdhe8192" 00:23:35.563 ] 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_nvme_attach_controller", 00:23:35.563 "params": { 00:23:35.563 "name": "TLSTEST", 00:23:35.563 "trtype": "TCP", 00:23:35.563 "adrfam": "IPv4", 00:23:35.563 "traddr": "10.0.0.2", 00:23:35.563 "trsvcid": "4420", 00:23:35.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.563 "prchk_reftag": false, 00:23:35.563 "prchk_guard": false, 00:23:35.563 "ctrlr_loss_timeout_sec": 0, 00:23:35.563 "reconnect_delay_sec": 0, 00:23:35.563 "fast_io_fail_timeout_sec": 0, 00:23:35.563 "psk": "/tmp/tmp.05UvFswSm3", 00:23:35.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.563 "hdgst": false, 00:23:35.563 "ddgst": false 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_nvme_set_hotplug", 00:23:35.563 "params": { 00:23:35.563 "period_us": 100000, 00:23:35.563 "enable": false 00:23:35.563 } 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "method": "bdev_wait_for_examine" 00:23:35.563 } 00:23:35.563 ] 00:23:35.563 }, 00:23:35.563 { 00:23:35.563 "subsystem": "nbd", 00:23:35.563 "config": [] 00:23:35.563 } 00:23:35.563 ] 00:23:35.563 }' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 290535 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 290535 ']' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 290535 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290535 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290535' 00:23:35.563 killing process with pid 290535 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 290535 00:23:35.563 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.563 00:23:35.563 Latency(us) 00:23:35.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:35.563 =================================================================================================================== 00:23:35.563 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.563 [2024-07-11 11:09:49.767064] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 290535 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 290368 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 290368 ']' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 290368 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:35.563 11:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290368 00:23:35.822 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:35.822 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:35.822 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290368' 00:23:35.822 killing process with pid 290368 00:23:35.822 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 290368 00:23:35.822 [2024-07-11 11:09:50.009446] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:35.822 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 290368 00:23:36.081 11:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:36.081 11:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.081 11:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:36.081 "subsystems": [ 00:23:36.081 { 00:23:36.081 "subsystem": "keyring", 00:23:36.081 "config": [] 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "subsystem": "iobuf", 00:23:36.081 "config": [ 00:23:36.081 { 00:23:36.081 "method": "iobuf_set_options", 00:23:36.081 "params": { 00:23:36.081 "small_pool_count": 8192, 00:23:36.081 "large_pool_count": 1024, 00:23:36.081 "small_bufsize": 8192, 00:23:36.081 "large_bufsize": 135168 00:23:36.081 } 00:23:36.081 } 00:23:36.081 ] 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "subsystem": "sock", 00:23:36.081 "config": [ 00:23:36.081 { 00:23:36.081 "method": "sock_set_default_impl", 00:23:36.081 "params": { 00:23:36.081 "impl_name": "posix" 00:23:36.081 } 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "method": "sock_impl_set_options", 00:23:36.081 "params": { 00:23:36.081 "impl_name": "ssl", 00:23:36.081 "recv_buf_size": 4096, 00:23:36.081 "send_buf_size": 4096, 00:23:36.081 "enable_recv_pipe": true, 00:23:36.081 "enable_quickack": false, 00:23:36.081 "enable_placement_id": 0, 00:23:36.081 "enable_zerocopy_send_server": true, 00:23:36.081 "enable_zerocopy_send_client": false, 00:23:36.081 "zerocopy_threshold": 0, 00:23:36.081 "tls_version": 0, 00:23:36.081 "enable_ktls": false 00:23:36.081 } 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "method": "sock_impl_set_options", 00:23:36.081 "params": { 00:23:36.081 "impl_name": "posix", 00:23:36.081 "recv_buf_size": 2097152, 00:23:36.081 "send_buf_size": 2097152, 00:23:36.081 "enable_recv_pipe": true, 00:23:36.081 
"enable_quickack": false, 00:23:36.081 "enable_placement_id": 0, 00:23:36.081 "enable_zerocopy_send_server": true, 00:23:36.081 "enable_zerocopy_send_client": false, 00:23:36.081 "zerocopy_threshold": 0, 00:23:36.081 "tls_version": 0, 00:23:36.081 "enable_ktls": false 00:23:36.081 } 00:23:36.081 } 00:23:36.081 ] 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "subsystem": "vmd", 00:23:36.081 "config": [] 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "subsystem": "accel", 00:23:36.081 "config": [ 00:23:36.081 { 00:23:36.081 "method": "accel_set_options", 00:23:36.081 "params": { 00:23:36.081 "small_cache_size": 128, 00:23:36.081 "large_cache_size": 16, 00:23:36.081 "task_count": 2048, 00:23:36.081 "sequence_count": 2048, 00:23:36.081 "buf_count": 2048 00:23:36.081 } 00:23:36.081 } 00:23:36.081 ] 00:23:36.081 }, 00:23:36.081 { 00:23:36.081 "subsystem": "bdev", 00:23:36.081 "config": [ 00:23:36.082 { 00:23:36.082 "method": "bdev_set_options", 00:23:36.082 "params": { 00:23:36.082 "bdev_io_pool_size": 65535, 00:23:36.082 "bdev_io_cache_size": 256, 00:23:36.082 "bdev_auto_examine": true, 00:23:36.082 "iobuf_small_cache_size": 128, 00:23:36.082 "iobuf_large_cache_size": 16 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_raid_set_options", 00:23:36.082 "params": { 00:23:36.082 "process_window_size_kb": 1024 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_iscsi_set_options", 00:23:36.082 "params": { 00:23:36.082 "timeout_sec": 30 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_nvme_set_options", 00:23:36.082 "params": { 00:23:36.082 "action_on_timeout": "none", 00:23:36.082 "timeout_us": 0, 00:23:36.082 "timeout_admin_us": 0, 00:23:36.082 "keep_alive_timeout_ms": 10000, 00:23:36.082 "arbitration_burst": 0, 00:23:36.082 "low_priority_weight": 0, 00:23:36.082 "medium_priority_weight": 0, 00:23:36.082 "high_priority_weight": 0, 00:23:36.082 "nvme_adminq_poll_period_us": 10000, 00:23:36.082 "nvme_ioq_poll_period_us": 0, 00:23:36.082 "io_queue_requests": 0, 00:23:36.082 "delay_cmd_submit": true, 00:23:36.082 "transport_retry_count": 4, 00:23:36.082 "bdev_retry_count": 3, 00:23:36.082 "transport_ack_timeout": 0, 00:23:36.082 "ctrlr_loss_timeout_sec": 0, 00:23:36.082 "reconnect_delay_sec": 0, 00:23:36.082 "fast_io_fail_timeout_sec": 0, 00:23:36.082 "disable_auto_failback": false, 00:23:36.082 "generate_uuids": false, 00:23:36.082 "transport_tos": 0, 00:23:36.082 "nvme_error_stat": false, 00:23:36.082 "rdma_srq_size": 0, 00:23:36.082 "io_path_stat": false, 00:23:36.082 "allow_accel_sequence": false, 00:23:36.082 "rdma_max_cq_size": 0, 00:23:36.082 "rdma_cm_event_timeout_ms": 0, 00:23:36.082 "dhchap_digests": [ 00:23:36.082 "sha256", 00:23:36.082 "sha384", 00:23:36.082 "sha512" 00:23:36.082 ], 00:23:36.082 "dhchap_dhgroups": [ 00:23:36.082 "null", 00:23:36.082 "ffdhe2048", 00:23:36.082 "ffdhe3072", 00:23:36.082 "ffdhe4096", 00:23:36.082 "ffdhe6144", 00:23:36.082 "ffdhe8192" 00:23:36.082 ] 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_nvme_set_hotplug", 00:23:36.082 "params": { 00:23:36.082 "period_us": 100000, 00:23:36.082 "enable": false 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_malloc_create", 00:23:36.082 "params": { 00:23:36.082 "name": "malloc0", 00:23:36.082 "num_blocks": 8192, 00:23:36.082 "block_size": 4096, 00:23:36.082 "physical_block_size": 4096, 00:23:36.082 "uuid": "4cfa41e4-2061-46eb-b86c-1dc254abdf63", 00:23:36.082 "optimal_io_boundary": 0 00:23:36.082 } 
00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "bdev_wait_for_examine" 00:23:36.082 } 00:23:36.082 ] 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "subsystem": "nbd", 00:23:36.082 "config": [] 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "subsystem": "scheduler", 00:23:36.082 "config": [ 00:23:36.082 { 00:23:36.082 "method": "framework_set_scheduler", 00:23:36.082 "params": { 00:23:36.082 "name": "static" 00:23:36.082 } 00:23:36.082 } 00:23:36.082 ] 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "subsystem": "nvmf", 00:23:36.082 "config": [ 00:23:36.082 { 00:23:36.082 "method": "nvmf_set_config", 00:23:36.082 "params": { 00:23:36.082 "discovery_filter": "match_any", 00:23:36.082 "admin_cmd_passthru": { 00:23:36.082 "identify_ctrlr": false 00:23:36.082 } 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_set_max_subsystems", 00:23:36.082 "params": { 00:23:36.082 "max_subsystems": 1024 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_set_crdt", 00:23:36.082 "params": { 00:23:36.082 "crdt1": 0, 00:23:36.082 "crdt2": 0, 00:23:36.082 "crdt3": 0 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_create_transport", 00:23:36.082 "params": { 00:23:36.082 "trtype": "TCP", 00:23:36.082 "max_queue_depth": 128, 00:23:36.082 "max_io_qpairs_per_ctrlr": 127, 00:23:36.082 "in_capsule_data_size": 4096, 00:23:36.082 "max_io_size": 131072, 00:23:36.082 "io_unit_size": 131072, 00:23:36.082 "max_aq_depth": 128, 00:23:36.082 "num_shared_buffers": 511, 00:23:36.082 "buf_cache_size": 4294967295, 00:23:36.082 "dif_insert_or_strip": false, 00:23:36.082 "zcopy": false, 00:23:36.082 "c2h_success": false, 00:23:36.082 "sock_priority": 0, 00:23:36.082 "abort_timeout_sec": 1, 00:23:36.082 "ack_timeout": 0, 00:23:36.082 "data_wr_pool_size": 0 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_create_subsystem", 00:23:36.082 "params": { 00:23:36.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.082 "allow_any_host": false, 00:23:36.082 "serial_number": "SPDK00000000000001", 00:23:36.082 "model_number": "SPDK bdev Controller", 00:23:36.082 "max_namespaces": 10, 00:23:36.082 "min_cntlid": 1, 00:23:36.082 "max_cntlid": 65519, 00:23:36.082 "ana_reporting": false 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_subsystem_add_host", 00:23:36.082 "params": { 00:23:36.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.082 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.082 "psk": "/tmp/tmp.05UvFswSm3" 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_subsystem_add_ns", 00:23:36.082 "params": { 00:23:36.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.082 "namespace": { 00:23:36.082 "nsid": 1, 00:23:36.082 "bdev_name": "malloc0", 00:23:36.082 "nguid": "4CFA41E4206146EBB86C1DC254ABDF63", 00:23:36.082 "uuid": "4cfa41e4-2061-46eb-b86c-1dc254abdf63", 00:23:36.082 "no_auto_visible": false 00:23:36.082 } 00:23:36.082 } 00:23:36.082 }, 00:23:36.082 { 00:23:36.082 "method": "nvmf_subsystem_add_listener", 00:23:36.082 "params": { 00:23:36.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.082 "listen_address": { 00:23:36.082 "trtype": "TCP", 00:23:36.082 "adrfam": "IPv4", 00:23:36.082 "traddr": "10.0.0.2", 00:23:36.082 "trsvcid": "4420" 00:23:36.082 }, 00:23:36.082 "secure_channel": true 00:23:36.082 } 00:23:36.082 } 00:23:36.082 ] 00:23:36.082 } 00:23:36.082 ] 00:23:36.082 }' 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:36.082 11:09:50 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=290809 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 290809 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 290809 ']' 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.082 11:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.082 [2024-07-11 11:09:50.303959] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:36.082 [2024-07-11 11:09:50.304058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.082 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.082 [2024-07-11 11:09:50.370566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.082 [2024-07-11 11:09:50.459291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.082 [2024-07-11 11:09:50.459361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.082 [2024-07-11 11:09:50.459374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.082 [2024-07-11 11:09:50.459400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.082 [2024-07-11 11:09:50.459410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
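[annotation] The nvmfappstart step traced above boils down to launching a fresh nvmf_tgt inside the test's network namespace and handing it the JSON that target/tls.sh@203 just echoed, through a file descriptor rather than a file on disk. A minimal sketch of that pattern, with paths and flags copied from the trace; the readiness probe at the end is an assumption standing in for the waitforlisten helper (sketched later in this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # <(echo ...) surfaces inside the target as a /dev/fd/N path
    # (it appears as /dev/fd/62 in the trace above)
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    nvmfpid=$!
    # assumed probe: poll the default RPC socket until the app answers
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 30 rpc_get_methods >/dev/null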
00:23:36.082 [2024-07-11 11:09:50.459492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.342 [2024-07-11 11:09:50.686856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.342 [2024-07-11 11:09:50.702814] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:36.342 [2024-07-11 11:09:50.718860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.342 [2024-07-11 11:09:50.730899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=290963 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 290963 /var/tmp/bdevperf.sock 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 290963 ']' 00:23:36.907 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:36.908 "subsystems": [ 00:23:36.908 { 00:23:36.908 "subsystem": "keyring", 00:23:36.908 "config": [] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "iobuf", 00:23:36.908 "config": [ 00:23:36.908 { 00:23:36.908 "method": "iobuf_set_options", 00:23:36.908 "params": { 00:23:36.908 "small_pool_count": 8192, 00:23:36.908 "large_pool_count": 1024, 00:23:36.908 "small_bufsize": 8192, 00:23:36.908 "large_bufsize": 135168 00:23:36.908 } 00:23:36.908 } 00:23:36.908 ] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "sock", 00:23:36.908 "config": [ 00:23:36.908 { 00:23:36.908 "method": "sock_set_default_impl", 00:23:36.908 "params": { 00:23:36.908 "impl_name": "posix" 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "sock_impl_set_options", 00:23:36.908 "params": { 00:23:36.908 "impl_name": "ssl", 00:23:36.908 "recv_buf_size": 4096, 00:23:36.908 "send_buf_size": 4096, 00:23:36.908 "enable_recv_pipe": true, 00:23:36.908 "enable_quickack": false, 00:23:36.908 "enable_placement_id": 0, 00:23:36.908 "enable_zerocopy_send_server": true, 00:23:36.908 "enable_zerocopy_send_client": false, 00:23:36.908 "zerocopy_threshold": 0, 00:23:36.908 "tls_version": 0, 00:23:36.908 "enable_ktls": false 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "sock_impl_set_options", 00:23:36.908 "params": { 00:23:36.908 "impl_name": "posix", 00:23:36.908 "recv_buf_size": 2097152, 00:23:36.908 "send_buf_size": 2097152, 00:23:36.908 "enable_recv_pipe": true, 00:23:36.908 
"enable_quickack": false, 00:23:36.908 "enable_placement_id": 0, 00:23:36.908 "enable_zerocopy_send_server": true, 00:23:36.908 "enable_zerocopy_send_client": false, 00:23:36.908 "zerocopy_threshold": 0, 00:23:36.908 "tls_version": 0, 00:23:36.908 "enable_ktls": false 00:23:36.908 } 00:23:36.908 } 00:23:36.908 ] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "vmd", 00:23:36.908 "config": [] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "accel", 00:23:36.908 "config": [ 00:23:36.908 { 00:23:36.908 "method": "accel_set_options", 00:23:36.908 "params": { 00:23:36.908 "small_cache_size": 128, 00:23:36.908 "large_cache_size": 16, 00:23:36.908 "task_count": 2048, 00:23:36.908 "sequence_count": 2048, 00:23:36.908 "buf_count": 2048 00:23:36.908 } 00:23:36.908 } 00:23:36.908 ] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "bdev", 00:23:36.908 "config": [ 00:23:36.908 { 00:23:36.908 "method": "bdev_set_options", 00:23:36.908 "params": { 00:23:36.908 "bdev_io_pool_size": 65535, 00:23:36.908 "bdev_io_cache_size": 256, 00:23:36.908 "bdev_auto_examine": true, 00:23:36.908 "iobuf_small_cache_size": 128, 00:23:36.908 "iobuf_large_cache_size": 16 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_raid_set_options", 00:23:36.908 "params": { 00:23:36.908 "process_window_size_kb": 1024 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_iscsi_set_options", 00:23:36.908 "params": { 00:23:36.908 "timeout_sec": 30 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_nvme_set_options", 00:23:36.908 "params": { 00:23:36.908 "action_on_timeout": "none", 00:23:36.908 "timeout_us": 0, 00:23:36.908 "timeout_admin_us": 0, 00:23:36.908 "keep_alive_timeout_ms": 10000, 00:23:36.908 "arbitration_burst": 0, 00:23:36.908 "low_priority_weight": 0, 00:23:36.908 "medium_priority_weight": 0, 00:23:36.908 "high_priority_weight": 0, 00:23:36.908 "nvme_adminq_poll_period_us": 10000, 00:23:36.908 "nvme_ioq_poll_period_us": 0, 00:23:36.908 "io_queue_requests": 512, 00:23:36.908 "delay_cmd_submit": true, 00:23:36.908 "transport_retry_count": 4, 00:23:36.908 "bdev_retry_count": 3, 00:23:36.908 "transport_ack_timeout": 0, 00:23:36.908 "ctrlr_loss_timeout_sec": 0, 00:23:36.908 "reconnect_delay_sec": 0, 00:23:36.908 "fast_io_fail_timeout_sec": 0, 00:23:36.908 "disable_auto_failback": false, 00:23:36.908 "generate_uuids": false, 00:23:36.908 "transport_tos": 0, 00:23:36.908 "nvme_error_stat": false, 00:23:36.908 "rdma_srq_size": 0, 00:23:36.908 "io_path_stat": false, 00:23:36.908 "allow_accel_sequence": false, 00:23:36.908 "rdma_max_cq_size": 0, 00:23:36.908 "rdma_cm_event_timeout_ms": 0, 00:23:36.908 "dhchap_digests": [ 00:23:36.908 "sha256", 00:23:36.908 "sha384", 00:23:36.908 "sha512" 00:23:36.908 ], 00:23:36.908 "dhchap_dhgroups": [ 00:23:36.908 "null", 00:23:36.908 "ffdhe2048", 00:23:36.908 "ffdhe3072", 00:23:36.908 "ffdhe4096", 00:23:36.908 "ffdhe6144", 00:23:36.908 "ffdhe8192" 00:23:36.908 ] 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_nvme_attach_controller", 00:23:36.908 "params": { 00:23:36.908 "name": "TLSTEST", 00:23:36.908 "trtype": "TCP", 00:23:36.908 "adrfam": "IPv4", 00:23:36.908 "traddr": "10.0.0.2", 00:23:36.908 "trsvcid": "4420", 00:23:36.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.908 "prchk_reftag": false, 00:23:36.908 "prchk_guard": false, 00:23:36.908 "ctrlr_loss_timeout_sec": 0, 00:23:36.908 "reconnect_delay_sec": 0, 00:23:36.908 "fast_io_fail_timeout_sec": 0, 00:23:36.908 
"psk": "/tmp/tmp.05UvFswSm3", 00:23:36.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.908 "hdgst": false, 00:23:36.908 "ddgst": false 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_nvme_set_hotplug", 00:23:36.908 "params": { 00:23:36.908 "period_us": 100000, 00:23:36.908 "enable": false 00:23:36.908 } 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "method": "bdev_wait_for_examine" 00:23:36.908 } 00:23:36.908 ] 00:23:36.908 }, 00:23:36.908 { 00:23:36.908 "subsystem": "nbd", 00:23:36.908 "config": [] 00:23:36.908 } 00:23:36.908 ] 00:23:36.908 }' 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.908 11:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.167 [2024-07-11 11:09:51.343524] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:37.167 [2024-07-11 11:09:51.343599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290963 ] 00:23:37.167 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.167 [2024-07-11 11:09:51.400528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.167 [2024-07-11 11:09:51.483787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.426 [2024-07-11 11:09:51.652264] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.426 [2024-07-11 11:09:51.652438] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:37.992 11:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.992 11:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:37.992 11:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:37.992 Running I/O for 10 seconds... 
00:23:50.202 00:23:50.202 Latency(us) 00:23:50.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.202 Verification LBA range: start 0x0 length 0x2000 00:23:50.202 TLSTESTn1 : 10.03 3054.36 11.93 0.00 0.00 41819.47 8932.31 56312.41 00:23:50.202 =================================================================================================================== 00:23:50.202 Total : 3054.36 11.93 0.00 0.00 41819.47 8932.31 56312.41 00:23:50.202 0 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 290963 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 290963 ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 290963 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290963 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290963' 00:23:50.202 killing process with pid 290963 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 290963 00:23:50.202 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.202 00:23:50.202 Latency(us) 00:23:50.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.202 =================================================================================================================== 00:23:50.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.202 [2024-07-11 11:10:02.512528] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 290963 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 290809 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 290809 ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 290809 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 290809 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 290809' 00:23:50.202 killing process with pid 290809 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 290809 00:23:50.202 [2024-07-11 11:10:02.760670] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 290809 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.202 11:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=292284 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 292284 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 292284 ']' 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.202 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.202 [2024-07-11 11:10:03.050055] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:50.202 [2024-07-11 11:10:03.050148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.202 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.202 [2024-07-11 11:10:03.111855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.202 [2024-07-11 11:10:03.188685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.202 [2024-07-11 11:10:03.188744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.202 [2024-07-11 11:10:03.188779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.202 [2024-07-11 11:10:03.188790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.203 [2024-07-11 11:10:03.188799] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
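[annotation] The killprocess teardown that has now run four times in this trace (pids 290535, 290368, 290963, 290809) is a liveness check, a name lookup for the log message, then kill and wait. A rough reconstruction from the traced commands; the real helper in autotest_common.sh also handles sudo-owned processes, a branch these reactor_* processes never take:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # traced as: '[' -z <pid> ']'
        kill -0 "$pid" || return               # is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # traced as: '[' reactor_N = sudo ']' -- the sudo branch is never taken here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it and propagate the exit code
    }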
00:23:50.203 [2024-07-11 11:10:03.188824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.05UvFswSm3 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.05UvFswSm3 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.203 [2024-07-11 11:10:03.549208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:50.203 11:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:50.203 [2024-07-11 11:10:04.038462] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.203 [2024-07-11 11:10:04.038689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.203 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:50.203 malloc0 00:23:50.203 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:50.203 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.05UvFswSm3 00:23:50.460 [2024-07-11 11:10:04.754409] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=292468 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 292468 /var/tmp/bdevperf.sock 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 292468 ']' 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.461 11:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.461 [2024-07-11 11:10:04.816603] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:50.461 [2024-07-11 11:10:04.816680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292468 ] 00:23:50.461 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.461 [2024-07-11 11:10:04.879535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.718 [2024-07-11 11:10:04.970244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.718 11:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.718 11:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.718 11:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.05UvFswSm3 00:23:50.975 11:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:51.233 [2024-07-11 11:10:05.541885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.233 nvme0n1 00:23:51.233 11:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.491 Running I/O for 1 seconds... 
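[annotation] This pass (target/tls.sh@227-@228) exercises the keyring path: instead of handing the PSK file straight to the controller, the key is first registered under a name with keyring_file_add_key and then referenced as "key0" in the attach, which is why the saved config later in this log shows "psk": "key0" rather than a path. Both RPCs below are copied verbatim from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # register the PSK file under the keyring handle "key0"
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.05UvFswSm3
    # attach over TLS, referencing the key by name instead of by path
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1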
00:23:52.428 00:23:52.428 Latency(us) 00:23:52.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.428 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:52.428 Verification LBA range: start 0x0 length 0x2000 00:23:52.428 nvme0n1 : 1.02 3442.39 13.45 0.00 0.00 36855.42 6359.42 45049.93 00:23:52.428 =================================================================================================================== 00:23:52.428 Total : 3442.39 13.45 0.00 0.00 36855.42 6359.42 45049.93 00:23:52.428 0 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 292468 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 292468 ']' 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 292468 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292468 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292468' 00:23:52.428 killing process with pid 292468 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 292468 00:23:52.428 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.428 00:23:52.428 Latency(us) 00:23:52.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.428 =================================================================================================================== 00:23:52.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.428 11:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 292468 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 292284 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 292284 ']' 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 292284 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292284 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292284' 00:23:52.687 killing process with pid 292284 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 292284 00:23:52.687 [2024-07-11 11:10:07.041159] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:52.687 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 292284 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.945 11:10:07 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=292847 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 292847 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 292847 ']' 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.945 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.945 [2024-07-11 11:10:07.300308] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:52.945 [2024-07-11 11:10:07.300386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.945 [2024-07-11 11:10:07.363442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.203 [2024-07-11 11:10:07.447740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.203 [2024-07-11 11:10:07.447817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.203 [2024-07-11 11:10:07.447845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.203 [2024-07-11 11:10:07.447856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.203 [2024-07-11 11:10:07.447865] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
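[annotation] Every app start above funnels through the same waitforlisten gate ("Waiting for process to start up and listen on UNIX domain socket ..."). A sketch reconstructed from what xtrace exposes (the rpc_addr and max_retries=100 locals, the closing (( i == 0 )) check, return 0); the body of the polling loop itself is an assumption, since the trace only shows the function's entry and exit:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        local SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            # assumed probe: any answered RPC means the socket is live
            "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                     # retries exhausted
        return 0
    }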
00:23:53.203 [2024-07-11 11:10:07.447897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.203 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.203 [2024-07-11 11:10:07.569611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.203 malloc0 00:23:53.203 [2024-07-11 11:10:07.600010] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.203 [2024-07-11 11:10:07.600249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=292873 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 292873 /var/tmp/bdevperf.sock 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 292873 ']' 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.462 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.462 [2024-07-11 11:10:07.666304] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
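[annotation] The final pass below configures the target with live RPCs instead of a pre-built blob, runs a one-second verify job, then snapshots both sides with save_config; the tgtcfg and bperfcfg dumps that follow are those snapshots, and the nvmfappstart -c /dev/fd/62 at target/tls.sh@269 at the end of this log boots a fresh target straight from the captured JSON. A condensed sketch of that round-trip, with socket paths taken from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # snapshot the running target and the running bdevperf as JSON
    tgtcfg=$("$SPDK/scripts/rpc.py" save_config)
    bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
    # ...teardown... then prove the snapshot replays cleanly by booting from it
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -c <(echo "$tgtcfg") &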
00:23:53.462 [2024-07-11 11:10:07.666379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292873 ] 00:23:53.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.462 [2024-07-11 11:10:07.723301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.462 [2024-07-11 11:10:07.806149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.721 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.721 11:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.721 11:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.05UvFswSm3 00:23:53.980 11:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:53.980 [2024-07-11 11:10:08.394158] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.240 nvme0n1 00:23:54.240 11:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.240 Running I/O for 1 seconds... 00:23:55.617 00:23:55.617 Latency(us) 00:23:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.617 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:55.617 Verification LBA range: start 0x0 length 0x2000 00:23:55.617 nvme0n1 : 1.03 3060.30 11.95 0.00 0.00 41214.80 6189.51 37476.88 00:23:55.617 =================================================================================================================== 00:23:55.617 Total : 3060.30 11.95 0.00 0.00 41214.80 6189.51 37476.88 00:23:55.617 0 00:23:55.617 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:55.617 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.617 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.617 11:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.617 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:55.617 "subsystems": [ 00:23:55.617 { 00:23:55.617 "subsystem": "keyring", 00:23:55.617 "config": [ 00:23:55.617 { 00:23:55.617 "method": "keyring_file_add_key", 00:23:55.617 "params": { 00:23:55.617 "name": "key0", 00:23:55.617 "path": "/tmp/tmp.05UvFswSm3" 00:23:55.617 } 00:23:55.617 } 00:23:55.617 ] 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "subsystem": "iobuf", 00:23:55.617 "config": [ 00:23:55.617 { 00:23:55.617 "method": "iobuf_set_options", 00:23:55.617 "params": { 00:23:55.617 "small_pool_count": 8192, 00:23:55.617 "large_pool_count": 1024, 00:23:55.617 "small_bufsize": 8192, 00:23:55.617 "large_bufsize": 135168 00:23:55.617 } 00:23:55.617 } 00:23:55.617 ] 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "subsystem": "sock", 00:23:55.617 "config": [ 00:23:55.617 { 00:23:55.617 "method": "sock_set_default_impl", 00:23:55.617 "params": { 00:23:55.617 "impl_name": "posix" 00:23:55.617 } 
00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "method": "sock_impl_set_options", 00:23:55.617 "params": { 00:23:55.617 "impl_name": "ssl", 00:23:55.617 "recv_buf_size": 4096, 00:23:55.617 "send_buf_size": 4096, 00:23:55.617 "enable_recv_pipe": true, 00:23:55.617 "enable_quickack": false, 00:23:55.617 "enable_placement_id": 0, 00:23:55.617 "enable_zerocopy_send_server": true, 00:23:55.617 "enable_zerocopy_send_client": false, 00:23:55.617 "zerocopy_threshold": 0, 00:23:55.617 "tls_version": 0, 00:23:55.617 "enable_ktls": false 00:23:55.617 } 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "method": "sock_impl_set_options", 00:23:55.617 "params": { 00:23:55.617 "impl_name": "posix", 00:23:55.617 "recv_buf_size": 2097152, 00:23:55.617 "send_buf_size": 2097152, 00:23:55.617 "enable_recv_pipe": true, 00:23:55.617 "enable_quickack": false, 00:23:55.617 "enable_placement_id": 0, 00:23:55.617 "enable_zerocopy_send_server": true, 00:23:55.617 "enable_zerocopy_send_client": false, 00:23:55.617 "zerocopy_threshold": 0, 00:23:55.617 "tls_version": 0, 00:23:55.617 "enable_ktls": false 00:23:55.617 } 00:23:55.617 } 00:23:55.617 ] 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "subsystem": "vmd", 00:23:55.617 "config": [] 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "subsystem": "accel", 00:23:55.617 "config": [ 00:23:55.617 { 00:23:55.617 "method": "accel_set_options", 00:23:55.617 "params": { 00:23:55.617 "small_cache_size": 128, 00:23:55.617 "large_cache_size": 16, 00:23:55.617 "task_count": 2048, 00:23:55.617 "sequence_count": 2048, 00:23:55.617 "buf_count": 2048 00:23:55.617 } 00:23:55.617 } 00:23:55.617 ] 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "subsystem": "bdev", 00:23:55.617 "config": [ 00:23:55.617 { 00:23:55.617 "method": "bdev_set_options", 00:23:55.617 "params": { 00:23:55.617 "bdev_io_pool_size": 65535, 00:23:55.617 "bdev_io_cache_size": 256, 00:23:55.617 "bdev_auto_examine": true, 00:23:55.617 "iobuf_small_cache_size": 128, 00:23:55.617 "iobuf_large_cache_size": 16 00:23:55.617 } 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "method": "bdev_raid_set_options", 00:23:55.617 "params": { 00:23:55.617 "process_window_size_kb": 1024 00:23:55.617 } 00:23:55.617 }, 00:23:55.617 { 00:23:55.617 "method": "bdev_iscsi_set_options", 00:23:55.617 "params": { 00:23:55.617 "timeout_sec": 30 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "bdev_nvme_set_options", 00:23:55.618 "params": { 00:23:55.618 "action_on_timeout": "none", 00:23:55.618 "timeout_us": 0, 00:23:55.618 "timeout_admin_us": 0, 00:23:55.618 "keep_alive_timeout_ms": 10000, 00:23:55.618 "arbitration_burst": 0, 00:23:55.618 "low_priority_weight": 0, 00:23:55.618 "medium_priority_weight": 0, 00:23:55.618 "high_priority_weight": 0, 00:23:55.618 "nvme_adminq_poll_period_us": 10000, 00:23:55.618 "nvme_ioq_poll_period_us": 0, 00:23:55.618 "io_queue_requests": 0, 00:23:55.618 "delay_cmd_submit": true, 00:23:55.618 "transport_retry_count": 4, 00:23:55.618 "bdev_retry_count": 3, 00:23:55.618 "transport_ack_timeout": 0, 00:23:55.618 "ctrlr_loss_timeout_sec": 0, 00:23:55.618 "reconnect_delay_sec": 0, 00:23:55.618 "fast_io_fail_timeout_sec": 0, 00:23:55.618 "disable_auto_failback": false, 00:23:55.618 "generate_uuids": false, 00:23:55.618 "transport_tos": 0, 00:23:55.618 "nvme_error_stat": false, 00:23:55.618 "rdma_srq_size": 0, 00:23:55.618 "io_path_stat": false, 00:23:55.618 "allow_accel_sequence": false, 00:23:55.618 "rdma_max_cq_size": 0, 00:23:55.618 "rdma_cm_event_timeout_ms": 0, 00:23:55.618 "dhchap_digests": [ 00:23:55.618 "sha256", 
00:23:55.618 "sha384", 00:23:55.618 "sha512" 00:23:55.618 ], 00:23:55.618 "dhchap_dhgroups": [ 00:23:55.618 "null", 00:23:55.618 "ffdhe2048", 00:23:55.618 "ffdhe3072", 00:23:55.618 "ffdhe4096", 00:23:55.618 "ffdhe6144", 00:23:55.618 "ffdhe8192" 00:23:55.618 ] 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "bdev_nvme_set_hotplug", 00:23:55.618 "params": { 00:23:55.618 "period_us": 100000, 00:23:55.618 "enable": false 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "bdev_malloc_create", 00:23:55.618 "params": { 00:23:55.618 "name": "malloc0", 00:23:55.618 "num_blocks": 8192, 00:23:55.618 "block_size": 4096, 00:23:55.618 "physical_block_size": 4096, 00:23:55.618 "uuid": "8ab2aa61-4dd1-4079-862b-483a43178e04", 00:23:55.618 "optimal_io_boundary": 0 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "bdev_wait_for_examine" 00:23:55.618 } 00:23:55.618 ] 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "subsystem": "nbd", 00:23:55.618 "config": [] 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "subsystem": "scheduler", 00:23:55.618 "config": [ 00:23:55.618 { 00:23:55.618 "method": "framework_set_scheduler", 00:23:55.618 "params": { 00:23:55.618 "name": "static" 00:23:55.618 } 00:23:55.618 } 00:23:55.618 ] 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "subsystem": "nvmf", 00:23:55.618 "config": [ 00:23:55.618 { 00:23:55.618 "method": "nvmf_set_config", 00:23:55.618 "params": { 00:23:55.618 "discovery_filter": "match_any", 00:23:55.618 "admin_cmd_passthru": { 00:23:55.618 "identify_ctrlr": false 00:23:55.618 } 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_set_max_subsystems", 00:23:55.618 "params": { 00:23:55.618 "max_subsystems": 1024 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_set_crdt", 00:23:55.618 "params": { 00:23:55.618 "crdt1": 0, 00:23:55.618 "crdt2": 0, 00:23:55.618 "crdt3": 0 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_create_transport", 00:23:55.618 "params": { 00:23:55.618 "trtype": "TCP", 00:23:55.618 "max_queue_depth": 128, 00:23:55.618 "max_io_qpairs_per_ctrlr": 127, 00:23:55.618 "in_capsule_data_size": 4096, 00:23:55.618 "max_io_size": 131072, 00:23:55.618 "io_unit_size": 131072, 00:23:55.618 "max_aq_depth": 128, 00:23:55.618 "num_shared_buffers": 511, 00:23:55.618 "buf_cache_size": 4294967295, 00:23:55.618 "dif_insert_or_strip": false, 00:23:55.618 "zcopy": false, 00:23:55.618 "c2h_success": false, 00:23:55.618 "sock_priority": 0, 00:23:55.618 "abort_timeout_sec": 1, 00:23:55.618 "ack_timeout": 0, 00:23:55.618 "data_wr_pool_size": 0 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_create_subsystem", 00:23:55.618 "params": { 00:23:55.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.618 "allow_any_host": false, 00:23:55.618 "serial_number": "00000000000000000000", 00:23:55.618 "model_number": "SPDK bdev Controller", 00:23:55.618 "max_namespaces": 32, 00:23:55.618 "min_cntlid": 1, 00:23:55.618 "max_cntlid": 65519, 00:23:55.618 "ana_reporting": false 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_subsystem_add_host", 00:23:55.618 "params": { 00:23:55.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.618 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.618 "psk": "key0" 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_subsystem_add_ns", 00:23:55.618 "params": { 00:23:55.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.618 "namespace": { 00:23:55.618 "nsid": 1, 
00:23:55.618 "bdev_name": "malloc0", 00:23:55.618 "nguid": "8AB2AA614DD14079862B483A43178E04", 00:23:55.618 "uuid": "8ab2aa61-4dd1-4079-862b-483a43178e04", 00:23:55.618 "no_auto_visible": false 00:23:55.618 } 00:23:55.618 } 00:23:55.618 }, 00:23:55.618 { 00:23:55.618 "method": "nvmf_subsystem_add_listener", 00:23:55.618 "params": { 00:23:55.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.618 "listen_address": { 00:23:55.618 "trtype": "TCP", 00:23:55.618 "adrfam": "IPv4", 00:23:55.618 "traddr": "10.0.0.2", 00:23:55.618 "trsvcid": "4420" 00:23:55.618 }, 00:23:55.618 "secure_channel": true 00:23:55.618 } 00:23:55.618 } 00:23:55.618 ] 00:23:55.618 } 00:23:55.618 ] 00:23:55.618 }' 00:23:55.618 11:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:55.876 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:55.876 "subsystems": [ 00:23:55.876 { 00:23:55.876 "subsystem": "keyring", 00:23:55.876 "config": [ 00:23:55.876 { 00:23:55.876 "method": "keyring_file_add_key", 00:23:55.876 "params": { 00:23:55.876 "name": "key0", 00:23:55.876 "path": "/tmp/tmp.05UvFswSm3" 00:23:55.876 } 00:23:55.876 } 00:23:55.876 ] 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "subsystem": "iobuf", 00:23:55.876 "config": [ 00:23:55.876 { 00:23:55.876 "method": "iobuf_set_options", 00:23:55.876 "params": { 00:23:55.876 "small_pool_count": 8192, 00:23:55.876 "large_pool_count": 1024, 00:23:55.876 "small_bufsize": 8192, 00:23:55.876 "large_bufsize": 135168 00:23:55.876 } 00:23:55.876 } 00:23:55.876 ] 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "subsystem": "sock", 00:23:55.876 "config": [ 00:23:55.876 { 00:23:55.876 "method": "sock_set_default_impl", 00:23:55.876 "params": { 00:23:55.876 "impl_name": "posix" 00:23:55.876 } 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "method": "sock_impl_set_options", 00:23:55.876 "params": { 00:23:55.876 "impl_name": "ssl", 00:23:55.876 "recv_buf_size": 4096, 00:23:55.876 "send_buf_size": 4096, 00:23:55.876 "enable_recv_pipe": true, 00:23:55.876 "enable_quickack": false, 00:23:55.876 "enable_placement_id": 0, 00:23:55.876 "enable_zerocopy_send_server": true, 00:23:55.876 "enable_zerocopy_send_client": false, 00:23:55.876 "zerocopy_threshold": 0, 00:23:55.876 "tls_version": 0, 00:23:55.876 "enable_ktls": false 00:23:55.876 } 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "method": "sock_impl_set_options", 00:23:55.876 "params": { 00:23:55.876 "impl_name": "posix", 00:23:55.876 "recv_buf_size": 2097152, 00:23:55.876 "send_buf_size": 2097152, 00:23:55.876 "enable_recv_pipe": true, 00:23:55.876 "enable_quickack": false, 00:23:55.876 "enable_placement_id": 0, 00:23:55.876 "enable_zerocopy_send_server": true, 00:23:55.876 "enable_zerocopy_send_client": false, 00:23:55.876 "zerocopy_threshold": 0, 00:23:55.876 "tls_version": 0, 00:23:55.876 "enable_ktls": false 00:23:55.876 } 00:23:55.876 } 00:23:55.876 ] 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "subsystem": "vmd", 00:23:55.876 "config": [] 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "subsystem": "accel", 00:23:55.876 "config": [ 00:23:55.876 { 00:23:55.876 "method": "accel_set_options", 00:23:55.876 "params": { 00:23:55.876 "small_cache_size": 128, 00:23:55.876 "large_cache_size": 16, 00:23:55.876 "task_count": 2048, 00:23:55.876 "sequence_count": 2048, 00:23:55.876 "buf_count": 2048 00:23:55.876 } 00:23:55.876 } 00:23:55.876 ] 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "subsystem": "bdev", 00:23:55.876 "config": [ 
00:23:55.876 { 00:23:55.876 "method": "bdev_set_options", 00:23:55.876 "params": { 00:23:55.876 "bdev_io_pool_size": 65535, 00:23:55.876 "bdev_io_cache_size": 256, 00:23:55.876 "bdev_auto_examine": true, 00:23:55.876 "iobuf_small_cache_size": 128, 00:23:55.876 "iobuf_large_cache_size": 16 00:23:55.876 } 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "method": "bdev_raid_set_options", 00:23:55.876 "params": { 00:23:55.876 "process_window_size_kb": 1024 00:23:55.876 } 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "method": "bdev_iscsi_set_options", 00:23:55.876 "params": { 00:23:55.876 "timeout_sec": 30 00:23:55.876 } 00:23:55.876 }, 00:23:55.876 { 00:23:55.876 "method": "bdev_nvme_set_options", 00:23:55.876 "params": { 00:23:55.876 "action_on_timeout": "none", 00:23:55.876 "timeout_us": 0, 00:23:55.876 "timeout_admin_us": 0, 00:23:55.876 "keep_alive_timeout_ms": 10000, 00:23:55.876 "arbitration_burst": 0, 00:23:55.876 "low_priority_weight": 0, 00:23:55.876 "medium_priority_weight": 0, 00:23:55.876 "high_priority_weight": 0, 00:23:55.876 "nvme_adminq_poll_period_us": 10000, 00:23:55.876 "nvme_ioq_poll_period_us": 0, 00:23:55.876 "io_queue_requests": 512, 00:23:55.876 "delay_cmd_submit": true, 00:23:55.876 "transport_retry_count": 4, 00:23:55.876 "bdev_retry_count": 3, 00:23:55.876 "transport_ack_timeout": 0, 00:23:55.876 "ctrlr_loss_timeout_sec": 0, 00:23:55.876 "reconnect_delay_sec": 0, 00:23:55.876 "fast_io_fail_timeout_sec": 0, 00:23:55.876 "disable_auto_failback": false, 00:23:55.876 "generate_uuids": false, 00:23:55.876 "transport_tos": 0, 00:23:55.876 "nvme_error_stat": false, 00:23:55.876 "rdma_srq_size": 0, 00:23:55.876 "io_path_stat": false, 00:23:55.876 "allow_accel_sequence": false, 00:23:55.876 "rdma_max_cq_size": 0, 00:23:55.876 "rdma_cm_event_timeout_ms": 0, 00:23:55.877 "dhchap_digests": [ 00:23:55.877 "sha256", 00:23:55.877 "sha384", 00:23:55.877 "sha512" 00:23:55.877 ], 00:23:55.877 "dhchap_dhgroups": [ 00:23:55.877 "null", 00:23:55.877 "ffdhe2048", 00:23:55.877 "ffdhe3072", 00:23:55.877 "ffdhe4096", 00:23:55.877 "ffdhe6144", 00:23:55.877 "ffdhe8192" 00:23:55.877 ] 00:23:55.877 } 00:23:55.877 }, 00:23:55.877 { 00:23:55.877 "method": "bdev_nvme_attach_controller", 00:23:55.877 "params": { 00:23:55.877 "name": "nvme0", 00:23:55.877 "trtype": "TCP", 00:23:55.877 "adrfam": "IPv4", 00:23:55.877 "traddr": "10.0.0.2", 00:23:55.877 "trsvcid": "4420", 00:23:55.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.877 "prchk_reftag": false, 00:23:55.877 "prchk_guard": false, 00:23:55.877 "ctrlr_loss_timeout_sec": 0, 00:23:55.877 "reconnect_delay_sec": 0, 00:23:55.877 "fast_io_fail_timeout_sec": 0, 00:23:55.877 "psk": "key0", 00:23:55.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.877 "hdgst": false, 00:23:55.877 "ddgst": false 00:23:55.877 } 00:23:55.877 }, 00:23:55.877 { 00:23:55.877 "method": "bdev_nvme_set_hotplug", 00:23:55.877 "params": { 00:23:55.877 "period_us": 100000, 00:23:55.877 "enable": false 00:23:55.877 } 00:23:55.877 }, 00:23:55.877 { 00:23:55.877 "method": "bdev_enable_histogram", 00:23:55.877 "params": { 00:23:55.877 "name": "nvme0n1", 00:23:55.877 "enable": true 00:23:55.877 } 00:23:55.877 }, 00:23:55.877 { 00:23:55.877 "method": "bdev_wait_for_examine" 00:23:55.877 } 00:23:55.877 ] 00:23:55.877 }, 00:23:55.877 { 00:23:55.877 "subsystem": "nbd", 00:23:55.877 "config": [] 00:23:55.877 } 00:23:55.877 ] 00:23:55.877 }' 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 292873 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 292873 ']' 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 292873 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292873 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292873' 00:23:55.877 killing process with pid 292873 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 292873 00:23:55.877 Received shutdown signal, test time was about 1.000000 seconds 00:23:55.877 00:23:55.877 Latency(us) 00:23:55.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.877 =================================================================================================================== 00:23:55.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.877 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 292873 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 292847 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 292847 ']' 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 292847 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292847 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292847' 00:23:56.137 killing process with pid 292847 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 292847 00:23:56.137 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 292847 00:23:56.396 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:56.396 11:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.396 11:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:56.396 "subsystems": [ 00:23:56.396 { 00:23:56.396 "subsystem": "keyring", 00:23:56.396 "config": [ 00:23:56.396 { 00:23:56.396 "method": "keyring_file_add_key", 00:23:56.396 "params": { 00:23:56.396 "name": "key0", 00:23:56.396 "path": "/tmp/tmp.05UvFswSm3" 00:23:56.396 } 00:23:56.396 } 00:23:56.396 ] 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "subsystem": "iobuf", 00:23:56.396 "config": [ 00:23:56.396 { 00:23:56.396 "method": "iobuf_set_options", 00:23:56.396 "params": { 00:23:56.396 "small_pool_count": 8192, 00:23:56.396 "large_pool_count": 1024, 00:23:56.396 "small_bufsize": 8192, 00:23:56.396 "large_bufsize": 135168 00:23:56.396 } 00:23:56.396 } 00:23:56.396 ] 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "subsystem": "sock", 00:23:56.396 "config": [ 00:23:56.396 { 00:23:56.396 "method": 
"sock_set_default_impl", 00:23:56.396 "params": { 00:23:56.396 "impl_name": "posix" 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "sock_impl_set_options", 00:23:56.396 "params": { 00:23:56.396 "impl_name": "ssl", 00:23:56.396 "recv_buf_size": 4096, 00:23:56.396 "send_buf_size": 4096, 00:23:56.396 "enable_recv_pipe": true, 00:23:56.396 "enable_quickack": false, 00:23:56.396 "enable_placement_id": 0, 00:23:56.396 "enable_zerocopy_send_server": true, 00:23:56.396 "enable_zerocopy_send_client": false, 00:23:56.396 "zerocopy_threshold": 0, 00:23:56.396 "tls_version": 0, 00:23:56.396 "enable_ktls": false 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "sock_impl_set_options", 00:23:56.396 "params": { 00:23:56.396 "impl_name": "posix", 00:23:56.396 "recv_buf_size": 2097152, 00:23:56.396 "send_buf_size": 2097152, 00:23:56.396 "enable_recv_pipe": true, 00:23:56.396 "enable_quickack": false, 00:23:56.396 "enable_placement_id": 0, 00:23:56.396 "enable_zerocopy_send_server": true, 00:23:56.396 "enable_zerocopy_send_client": false, 00:23:56.396 "zerocopy_threshold": 0, 00:23:56.396 "tls_version": 0, 00:23:56.396 "enable_ktls": false 00:23:56.396 } 00:23:56.396 } 00:23:56.396 ] 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "subsystem": "vmd", 00:23:56.396 "config": [] 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "subsystem": "accel", 00:23:56.396 "config": [ 00:23:56.396 { 00:23:56.396 "method": "accel_set_options", 00:23:56.396 "params": { 00:23:56.396 "small_cache_size": 128, 00:23:56.396 "large_cache_size": 16, 00:23:56.396 "task_count": 2048, 00:23:56.396 "sequence_count": 2048, 00:23:56.396 "buf_count": 2048 00:23:56.396 } 00:23:56.396 } 00:23:56.396 ] 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "subsystem": "bdev", 00:23:56.396 "config": [ 00:23:56.396 { 00:23:56.396 "method": "bdev_set_options", 00:23:56.396 "params": { 00:23:56.396 "bdev_io_pool_size": 65535, 00:23:56.396 "bdev_io_cache_size": 256, 00:23:56.396 "bdev_auto_examine": true, 00:23:56.396 "iobuf_small_cache_size": 128, 00:23:56.396 "iobuf_large_cache_size": 16 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_raid_set_options", 00:23:56.396 "params": { 00:23:56.396 "process_window_size_kb": 1024 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_iscsi_set_options", 00:23:56.396 "params": { 00:23:56.396 "timeout_sec": 30 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_nvme_set_options", 00:23:56.396 "params": { 00:23:56.396 "action_on_timeout": "none", 00:23:56.396 "timeout_us": 0, 00:23:56.396 "timeout_admin_us": 0, 00:23:56.396 "keep_alive_timeout_ms": 10000, 00:23:56.396 "arbitration_burst": 0, 00:23:56.396 "low_priority_weight": 0, 00:23:56.396 "medium_priority_weight": 0, 00:23:56.396 "high_priority_weight": 0, 00:23:56.396 "nvme_adminq_poll_period_us": 10000, 00:23:56.396 "nvme_ioq_poll_period_us": 0, 00:23:56.396 "io_queue_requests": 0, 00:23:56.396 "delay_cmd_submit": true, 00:23:56.396 "transport_retry_count": 4, 00:23:56.396 "bdev_retry_count": 3, 00:23:56.396 "transport_ack_timeout": 0, 00:23:56.396 "ctrlr_loss_timeout_sec": 0, 00:23:56.396 "reconnect_delay_sec": 0, 00:23:56.396 "fast_io_fail_timeout_sec": 0, 00:23:56.396 "disable_auto_failback": false, 00:23:56.396 "generate_uuids": false, 00:23:56.396 "transport_tos": 0, 00:23:56.396 "nvme_error_stat": false, 00:23:56.396 "rdma_srq_size": 0, 00:23:56.396 "io_path_stat": false, 00:23:56.396 "allow_accel_sequence": false, 00:23:56.396 "rdma_max_cq_size": 0, 
00:23:56.396 "rdma_cm_event_timeout_ms": 0, 00:23:56.396 "dhchap_digests": [ 00:23:56.396 "sha256", 00:23:56.396 "sha384", 00:23:56.396 "sha512" 00:23:56.396 ], 00:23:56.396 "dhchap_dhgroups": [ 00:23:56.396 "null", 00:23:56.396 "ffdhe2048", 00:23:56.396 "ffdhe3072", 00:23:56.396 "ffdhe4096", 00:23:56.396 "ffdhe6144", 00:23:56.396 "ffdhe8192" 00:23:56.396 ] 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_nvme_set_hotplug", 00:23:56.396 "params": { 00:23:56.396 "period_us": 100000, 00:23:56.396 "enable": false 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_malloc_create", 00:23:56.396 "params": { 00:23:56.396 "name": "malloc0", 00:23:56.396 "num_blocks": 8192, 00:23:56.396 "block_size": 4096, 00:23:56.396 "physical_block_size": 4096, 00:23:56.396 "uuid": "8ab2aa61-4dd1-4079-862b-483a43178e04", 00:23:56.396 "optimal_io_boundary": 0 00:23:56.396 } 00:23:56.396 }, 00:23:56.396 { 00:23:56.396 "method": "bdev_wait_for_examine" 00:23:56.396 } 00:23:56.396 ] 00:23:56.396 }, 00:23:56.396 { 00:23:56.397 "subsystem": "nbd", 00:23:56.397 "config": [] 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "subsystem": "scheduler", 00:23:56.397 "config": [ 00:23:56.397 { 00:23:56.397 "method": "framework_set_scheduler", 00:23:56.397 "params": { 00:23:56.397 "name": "static" 00:23:56.397 } 00:23:56.397 } 00:23:56.397 ] 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "subsystem": "nvmf", 00:23:56.397 "config": [ 00:23:56.397 { 00:23:56.397 "method": "nvmf_set_config", 00:23:56.397 "params": { 00:23:56.397 "discovery_filter": "match_any", 00:23:56.397 "admin_cmd_passthru": { 00:23:56.397 "identify_ctrlr": false 00:23:56.397 } 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_set_max_subsystems", 00:23:56.397 "params": { 00:23:56.397 "max_subsystems": 1024 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_set_crdt", 00:23:56.397 "params": { 00:23:56.397 "crdt1": 0, 00:23:56.397 "crdt2": 0, 00:23:56.397 "crdt3": 0 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_create_transport", 00:23:56.397 "params": { 00:23:56.397 "trtype": "TCP", 00:23:56.397 "max_queue_depth": 128, 00:23:56.397 "max_io_qpairs_per_ctrlr": 127, 00:23:56.397 "in_capsule_data_size": 4096, 00:23:56.397 "max_io_size": 131072, 00:23:56.397 "io_unit_size": 131072, 00:23:56.397 "max_aq_depth": 128, 00:23:56.397 "num_shared_buffers": 511, 00:23:56.397 "buf_cache_size": 4294967295, 00:23:56.397 "dif_insert_or_strip": false, 00:23:56.397 "zcopy": false, 00:23:56.397 "c2h_success": false, 00:23:56.397 "sock_priority": 0, 00:23:56.397 "abort_timeout_sec": 1, 00:23:56.397 "ack_timeout": 0, 00:23:56.397 "data_wr_pool_size": 0 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_create_subsystem", 00:23:56.397 "params": { 00:23:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.397 "allow_any_host": false, 00:23:56.397 "serial_number": "00000000000000000000", 00:23:56.397 "model_number": "SPDK bdev Controller", 00:23:56.397 "max_namespaces": 32, 00:23:56.397 "min_cntlid": 1, 00:23:56.397 "max_cntlid": 65519, 00:23:56.397 "ana_reporting": false 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_subsystem_add_host", 00:23:56.397 "params": { 00:23:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.397 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.397 "psk": "key0" 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_subsystem_add_ns", 00:23:56.397 "params": { 
00:23:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.397 "namespace": { 00:23:56.397 "nsid": 1, 00:23:56.397 "bdev_name": "malloc0", 00:23:56.397 "nguid": "8AB2AA614DD14079862B483A43178E04", 00:23:56.397 "uuid": "8ab2aa61-4dd1-4079-862b-483a43178e04", 00:23:56.397 "no_auto_visible": false 00:23:56.397 } 00:23:56.397 } 00:23:56.397 }, 00:23:56.397 { 00:23:56.397 "method": "nvmf_subsystem_add_listener", 00:23:56.397 "params": { 00:23:56.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.397 "listen_address": { 00:23:56.397 "trtype": "TCP", 00:23:56.397 "adrfam": "IPv4", 00:23:56.397 "traddr": "10.0.0.2", 00:23:56.397 "trsvcid": "4420" 00:23:56.397 }, 00:23:56.397 "secure_channel": true 00:23:56.397 } 00:23:56.397 } 00:23:56.397 ] 00:23:56.397 } 00:23:56.397 ] 00:23:56.397 }' 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=293283 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 293283 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 293283 ']' 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.397 11:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.397 [2024-07-11 11:10:10.676381] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:56.397 [2024-07-11 11:10:10.676459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.397 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.397 [2024-07-11 11:10:10.738154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.657 [2024-07-11 11:10:10.821117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.657 [2024-07-11 11:10:10.821170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.657 [2024-07-11 11:10:10.821199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.657 [2024-07-11 11:10:10.821211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.657 [2024-07-11 11:10:10.821220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.657 [2024-07-11 11:10:10.821298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.657 [2024-07-11 11:10:11.063160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.917 [2024-07-11 11:10:11.095166] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.917 [2024-07-11 11:10:11.104900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=293432 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 293432 /var/tmp/bdevperf.sock 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 293432 ']' 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.487 11:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:57.487 "subsystems": [ 00:23:57.487 { 00:23:57.487 "subsystem": "keyring", 00:23:57.487 "config": [ 00:23:57.487 { 00:23:57.487 "method": "keyring_file_add_key", 00:23:57.487 "params": { 00:23:57.487 "name": "key0", 00:23:57.487 "path": "/tmp/tmp.05UvFswSm3" 00:23:57.487 } 00:23:57.487 } 00:23:57.487 ] 00:23:57.487 }, 00:23:57.487 { 00:23:57.487 "subsystem": "iobuf", 00:23:57.487 "config": [ 00:23:57.487 { 00:23:57.487 "method": "iobuf_set_options", 00:23:57.488 "params": { 00:23:57.488 "small_pool_count": 8192, 00:23:57.488 "large_pool_count": 1024, 00:23:57.488 "small_bufsize": 8192, 00:23:57.488 "large_bufsize": 135168 00:23:57.488 } 00:23:57.488 } 00:23:57.488 ] 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "subsystem": "sock", 00:23:57.488 "config": [ 00:23:57.488 { 00:23:57.488 "method": "sock_set_default_impl", 00:23:57.488 "params": { 00:23:57.488 "impl_name": "posix" 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "sock_impl_set_options", 00:23:57.488 "params": { 00:23:57.488 "impl_name": "ssl", 00:23:57.488 "recv_buf_size": 4096, 00:23:57.488 "send_buf_size": 4096, 00:23:57.488 "enable_recv_pipe": true, 00:23:57.488 "enable_quickack": false, 00:23:57.488 "enable_placement_id": 0, 00:23:57.488 "enable_zerocopy_send_server": true, 00:23:57.488 "enable_zerocopy_send_client": false, 00:23:57.488 "zerocopy_threshold": 0, 00:23:57.488 "tls_version": 0, 00:23:57.488 "enable_ktls": false 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "sock_impl_set_options", 00:23:57.488 "params": { 00:23:57.488 "impl_name": "posix", 00:23:57.488 "recv_buf_size": 2097152, 00:23:57.488 "send_buf_size": 2097152, 00:23:57.488 
"enable_recv_pipe": true, 00:23:57.488 "enable_quickack": false, 00:23:57.488 "enable_placement_id": 0, 00:23:57.488 "enable_zerocopy_send_server": true, 00:23:57.488 "enable_zerocopy_send_client": false, 00:23:57.488 "zerocopy_threshold": 0, 00:23:57.488 "tls_version": 0, 00:23:57.488 "enable_ktls": false 00:23:57.488 } 00:23:57.488 } 00:23:57.488 ] 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "subsystem": "vmd", 00:23:57.488 "config": [] 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "subsystem": "accel", 00:23:57.488 "config": [ 00:23:57.488 { 00:23:57.488 "method": "accel_set_options", 00:23:57.488 "params": { 00:23:57.488 "small_cache_size": 128, 00:23:57.488 "large_cache_size": 16, 00:23:57.488 "task_count": 2048, 00:23:57.488 "sequence_count": 2048, 00:23:57.488 "buf_count": 2048 00:23:57.488 } 00:23:57.488 } 00:23:57.488 ] 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "subsystem": "bdev", 00:23:57.488 "config": [ 00:23:57.488 { 00:23:57.488 "method": "bdev_set_options", 00:23:57.488 "params": { 00:23:57.488 "bdev_io_pool_size": 65535, 00:23:57.488 "bdev_io_cache_size": 256, 00:23:57.488 "bdev_auto_examine": true, 00:23:57.488 "iobuf_small_cache_size": 128, 00:23:57.488 "iobuf_large_cache_size": 16 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_raid_set_options", 00:23:57.488 "params": { 00:23:57.488 "process_window_size_kb": 1024 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_iscsi_set_options", 00:23:57.488 "params": { 00:23:57.488 "timeout_sec": 30 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_nvme_set_options", 00:23:57.488 "params": { 00:23:57.488 "action_on_timeout": "none", 00:23:57.488 "timeout_us": 0, 00:23:57.488 "timeout_admin_us": 0, 00:23:57.488 "keep_alive_timeout_ms": 10000, 00:23:57.488 "arbitration_burst": 0, 00:23:57.488 "low_priority_weight": 0, 00:23:57.488 "medium_priority_weight": 0, 00:23:57.488 "high_priority_weight": 0, 00:23:57.488 "nvme_adminq_poll_period_us": 10000, 00:23:57.488 "nvme_ioq_poll_period_us": 0, 00:23:57.488 "io_queue_requests": 512, 00:23:57.488 "delay_cmd_submit": true, 00:23:57.488 "transport_retry_count": 4, 00:23:57.488 "bdev_retry_count": 3, 00:23:57.488 "transport_ack_timeout": 0, 00:23:57.488 "ctrlr_loss_timeout_sec": 0, 00:23:57.488 "reconnect_delay_sec": 0, 00:23:57.488 "fast_io_fail_timeout_sec": 0, 00:23:57.488 "disable_auto_failback": false, 00:23:57.488 "generate_uuids": false, 00:23:57.488 "transport_tos": 0, 00:23:57.488 "nvme_error_stat": false, 00:23:57.488 "rdma_srq_size": 0, 00:23:57.488 "io_path_stat": false, 00:23:57.488 "allow_accel_sequence": false, 00:23:57.488 "rdma_max_cq_size": 0, 00:23:57.488 "rdma_cm_event_timeout_ms": 0, 00:23:57.488 "dhchap_digests": [ 00:23:57.488 "sha256", 00:23:57.488 "sha384", 00:23:57.488 "sha512" 00:23:57.488 ], 00:23:57.488 "dhchap_dhgroups": [ 00:23:57.488 "null", 00:23:57.488 "ffdhe2048", 00:23:57.488 "ffdhe3072", 00:23:57.488 "ffdhe4096", 00:23:57.488 "ffdhe6144", 00:23:57.488 "ffdhe8192" 00:23:57.488 ] 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_nvme_attach_controller", 00:23:57.488 "params": { 00:23:57.488 "name": "nvme0", 00:23:57.488 "trtype": "TCP", 00:23:57.488 "adrfam": "IPv4", 00:23:57.488 "traddr": "10.0.0.2", 00:23:57.488 "trsvcid": "4420", 00:23:57.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.488 "prchk_reftag": false, 00:23:57.488 "prchk_guard": false, 00:23:57.488 "ctrlr_loss_timeout_sec": 0, 00:23:57.488 "reconnect_delay_sec": 0, 00:23:57.488 
"fast_io_fail_timeout_sec": 0, 00:23:57.488 "psk": "key0", 00:23:57.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.488 "hdgst": false, 00:23:57.488 "ddgst": false 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_nvme_set_hotplug", 00:23:57.488 "params": { 00:23:57.488 "period_us": 100000, 00:23:57.488 "enable": false 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_enable_histogram", 00:23:57.488 "params": { 00:23:57.488 "name": "nvme0n1", 00:23:57.488 "enable": true 00:23:57.488 } 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "method": "bdev_wait_for_examine" 00:23:57.488 } 00:23:57.488 ] 00:23:57.488 }, 00:23:57.488 { 00:23:57.488 "subsystem": "nbd", 00:23:57.488 "config": [] 00:23:57.488 } 00:23:57.488 ] 00:23:57.488 }' 00:23:57.488 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.488 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.488 11:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.488 [2024-07-11 11:10:11.676463] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:23:57.488 [2024-07-11 11:10:11.676537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293432 ] 00:23:57.488 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.488 [2024-07-11 11:10:11.732661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.488 [2024-07-11 11:10:11.815801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.747 [2024-07-11 11:10:11.992376] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.313 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.313 11:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:58.313 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.313 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:58.571 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.571 11:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.571 Running I/O for 1 seconds... 
00:23:59.949 00:23:59.949 Latency(us) 00:23:59.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.949 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.949 Verification LBA range: start 0x0 length 0x2000 00:23:59.949 nvme0n1 : 1.02 3435.35 13.42 0.00 0.00 36826.41 6165.24 38059.43 00:23:59.949 =================================================================================================================== 00:23:59.949 Total : 3435.35 13.42 0.00 0.00 36826.41 6165.24 38059.43 00:23:59.949 0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:59.949 nvmf_trace.0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 293432 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 293432 ']' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 293432 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 293432 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 293432' 00:23:59.949 killing process with pid 293432 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 293432 00:23:59.949 Received shutdown signal, test time was about 1.000000 seconds 00:23:59.949 00:23:59.949 Latency(us) 00:23:59.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.949 =================================================================================================================== 00:23:59.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 293432 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:59.949 
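The process_shm step above archives the target's trace ring for offline replay; a condensed sketch, with $output_dir standing in for the harness's ../output directory:

    # Pack /dev/shm/nvmf_trace.0 so 'spdk_trace' can analyze it after the run.
    shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
    [[ -n $shm_files ]] && \
        tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" $shm_files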
11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.949 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.949 rmmod nvme_tcp 00:23:59.949 rmmod nvme_fabrics 00:24:00.208 rmmod nvme_keyring 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 293283 ']' 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 293283 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 293283 ']' 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 293283 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 293283 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 293283' 00:24:00.208 killing process with pid 293283 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 293283 00:24:00.208 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 293283 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.467 11:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.371 11:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.371 11:10:16 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MFUUnPKv7U /tmp/tmp.o1aLeRwNAy /tmp/tmp.05UvFswSm3 00:24:02.371 00:24:02.371 real 1m18.449s 00:24:02.371 user 2m2.522s 00:24:02.371 sys 0m26.440s 00:24:02.371 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.371 11:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.371 ************************************ 00:24:02.371 END TEST nvmf_tls 00:24:02.371 ************************************ 00:24:02.371 11:10:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.371 11:10:16 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.371 11:10:16 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.371 11:10:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.371 11:10:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.371 ************************************ 00:24:02.371 START TEST nvmf_fips 00:24:02.371 ************************************ 00:24:02.371 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:02.631 * Looking for test storage... 00:24:02.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:02.631 
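The openssl version gate being traced here reduces to a field-by-field dotted-version compare; a self-contained sketch of the same logic, assuming purely numeric version components:

    # Succeed iff version $1 >= $2, splitting on the same separators (.-:)
    # the harness uses.
    version_ge() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0   # equal versions satisfy >=
    }
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0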
11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:02.631 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:02.632 11:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:02.632 Error setting digest 00:24:02.632 00F294341A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:02.632 00F294341A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.632 11:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:05.167 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.168 
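The NIC discovery that follows maps each whitelisted PCI address to its kernel net device through sysfs; the core of that loop, assuming pci_devs was populated by the device-ID scan above:

    # For every supported NIC on the bus, record the net interface(s) sysfs
    # exposes under it (cvl_0_0 and cvl_0_1 in this run).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done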
11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:05.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:05.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:05.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:05.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:24:05.168 00:24:05.168 --- 10.0.0.2 ping statistics --- 00:24:05.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.168 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:24:05.168 00:24:05.168 --- 10.0.0.1 ping statistics --- 00:24:05.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.168 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=295673 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 295673 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 295673 ']' 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.168 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.168 [2024-07-11 11:10:19.372460] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:24:05.168 [2024-07-11 11:10:19.372540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.168 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.168 [2024-07-11 11:10:19.437266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.168 [2024-07-11 11:10:19.525025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.168 [2024-07-11 11:10:19.525120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:05.168 [2024-07-11 11:10:19.525145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.168 [2024-07-11 11:10:19.525156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.168 [2024-07-11 11:10:19.525179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.168 [2024-07-11 11:10:19.525207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:05.427 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.685 [2024-07-11 11:10:19.891531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.685 [2024-07-11 11:10:19.907525] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.685 [2024-07-11 11:10:19.907725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.685 [2024-07-11 11:10:19.937579] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:05.685 malloc0 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=295824 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 295824 /var/tmp/bdevperf.sock 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 295824 ']' 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.685 11:10:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:05.685 [2024-07-11 11:10:20.027414] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:24:05.685 [2024-07-11 11:10:20.027505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295824 ] 00:24:05.685 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.685 [2024-07-11 11:10:20.087665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.943 [2024-07-11 11:10:20.174024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.943 11:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.943 11:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:05.943 11:10:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:06.201 [2024-07-11 11:10:20.494951] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.201 [2024-07-11 11:10:20.495119] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.201 TLSTESTn1 00:24:06.201 11:10:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.461 Running I/O for 10 seconds... 
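Condensing the setup the trace just walked through: fips.sh writes a TLS PSK interchange key (the NVMeTLSkey-1:01:... string) to key.txt, registers it with the target for host nqn.2016-06.io.spdk:host1, and then drives bdevperf against the TLS-enabled listener. The two RPCs that do the work, with paths shortened to the spdk checkout (a condensation of the logged commands, not a new recipe):

    # Attach a TLS-protected NVMe-oF controller inside bdevperf, then start
    # the queued 10-second verify job; both RPCs go to the bdevperf socket.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that the trace itself flags both ends of this as deprecated: the target-side PSK path (nvmf_tcp_psk_path) and the initiator-side spdk_nvme_ctrlr_opts.psk are scheduled for removal in v24.09, so this exact invocation is tied to the SPDK revision under test.
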
00:24:16.446
00:24:16.446 Latency(us)
00:24:16.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.446 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:16.446 Verification LBA range: start 0x0 length 0x2000
00:24:16.446 TLSTESTn1 : 10.02 3052.43 11.92 0.00 0.00 41859.39 9272.13 48351.00
00:24:16.446 ===================================================================================================================
00:24:16.446 Total : 3052.43 11.92 0.00 0.00 41859.39 9272.13 48351.00
00:24:16.446 0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:16.446 nvmf_trace.0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 295824
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 295824 ']'
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 295824
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 295824
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 295824'
00:24:16.446 killing process with pid 295824
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 295824
00:24:16.446 Received shutdown signal, test time was about 10.000000 seconds
00:24:16.446
00:24:16.446 Latency(us)
00:24:16.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.446 ===================================================================================================================
00:24:16.446 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:16.446 [2024-07-11 11:10:30.849079] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:24:16.446 11:10:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 295824
00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.706 rmmod nvme_tcp 00:24:16.706 rmmod nvme_fabrics 00:24:16.706 rmmod nvme_keyring 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 295673 ']' 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 295673 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 295673 ']' 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 295673 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.706 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 295673 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 295673' 00:24:16.965 killing process with pid 295673 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 295673 00:24:16.965 [2024-07-11 11:10:31.137455] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 295673 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.965 11:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.499 00:24:19.499 real 0m16.643s 00:24:19.499 user 0m17.801s 00:24:19.499 sys 0m6.888s 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.499 ************************************ 00:24:19.499 END TEST nvmf_fips 00:24:19.499 
************************************ 00:24:19.499 11:10:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:19.499 11:10:33 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:19.499 11:10:33 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.499 11:10:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:19.499 11:10:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.499 11:10:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:19.499 ************************************ 00:24:19.499 START TEST nvmf_fuzz 00:24:19.499 ************************************ 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:19.499 * Looking for test storage... 00:24:19.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.499 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.500 11:10:33 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.500 11:10:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:21.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:21.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:21.401 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:21.401 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:24:21.401 00:24:21.401 --- 10.0.0.2 ping statistics --- 00:24:21.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.401 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:24:21.401 00:24:21.401 --- 10.0.0.1 ping statistics --- 00:24:21.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.401 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=299066 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 299066 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 299066 ']' 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
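With the fuzz target now answering on /var/tmp/spdk.sock inside the namespace, the rpc_cmd calls that follow provision a minimal subsystem for it. Condensed (rpc_cmd is the autotest wrapper around scripts/rpc.py; names, sizes, and flags are the ones from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as logged
    rpc_cmd bdev_malloc_create -b Malloc0 64 512       # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme_fuzz then makes two passes at that subsystem: 30 seconds of randomized admin and I/O commands with a fixed seed (-t 30 -S 123456 -N -a), followed by a replay of the canned requests in example.json; the opcode dumps and command counts for both passes appear below.
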
00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.401 11:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.660 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.661 Malloc0 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.661 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:21.920 11:10:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.920 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:21.920 11:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:53.988 Fuzzing completed. 
Shutting down the fuzz application 00:24:53.988 00:24:53.988 Dumping successful admin opcodes: 00:24:53.988 8, 9, 10, 24, 00:24:53.988 Dumping successful io opcodes: 00:24:53.988 0, 9, 00:24:53.988 NS: 0x200003aeff00 I/O qp, Total commands completed: 496435, total successful commands: 2859, random_seed: 1160257792 00:24:53.988 NS: 0x200003aeff00 admin qp, Total commands completed: 60512, total successful commands: 479, random_seed: 1203587520 00:24:53.988 11:11:06 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:53.988 Fuzzing completed. Shutting down the fuzz application 00:24:53.988 00:24:53.988 Dumping successful admin opcodes: 00:24:53.988 24, 00:24:53.988 Dumping successful io opcodes: 00:24:53.988 00:24:53.988 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4134508467 00:24:53.988 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4134626796 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.988 rmmod nvme_tcp 00:24:53.988 rmmod nvme_fabrics 00:24:53.988 rmmod nvme_keyring 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:53.988 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 299066 ']' 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 299066 ']' 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.989 
11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 299066' 00:24:53.989 killing process with pid 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 299066 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.989 11:11:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.529 11:11:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.529 11:11:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:56.529 00:24:56.529 real 0m36.981s 00:24:56.529 user 0m50.904s 00:24:56.529 sys 0m15.120s 00:24:56.529 11:11:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.529 11:11:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:56.529 ************************************ 00:24:56.529 END TEST nvmf_fuzz 00:24:56.529 ************************************ 00:24:56.529 11:11:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:56.529 11:11:10 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:56.529 11:11:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.530 11:11:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.530 11:11:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.530 ************************************ 00:24:56.530 START TEST nvmf_multiconnection 00:24:56.530 ************************************ 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:56.530 * Looking for test storage... 
00:24:56.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.530 11:11:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.434 11:11:12 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:58.434 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:58.434 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:58.434 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:58.434 11:11:12 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.434 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
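The nvmf_tcp_init sequence traced above and below turns the two ports of the single discovered E810 NIC into a point-to-point NVMe/TCP test link: the target-side port (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, while the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24. Condensed from the interleaved trace records (the cvl_0_* names are whatever the test discovered on this machine, not fixed values):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (in the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP 4420 (NVMe/TCP) on the initiator side
    ping -c 1 10.0.0.2                                  # verify the link in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1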
00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:24:58.434 00:24:58.434 --- 10.0.0.2 ping statistics --- 00:24:58.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.434 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:24:58.434 00:24:58.434 --- 10.0.0.1 ping statistics --- 00:24:58.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.434 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.434 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=304681 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 304681 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 304681 ']' 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
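With the link verified, the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF: shm id 0, all tracepoint groups, four cores), and waitforlisten blocks until it answers on /var/tmp/spdk.sock. A minimal sketch of that readiness check, assuming scripts/rpc.py from the SPDK tree; the real helper additionally bounds the retries (max_retries=100 in the trace) and verifies that the pid is still alive:

    rpc_sock=/var/tmp/spdk.sock
    until scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the RPC server accepts requests
    done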
00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.693 11:11:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.694 [2024-07-11 11:11:12.907325] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:24:58.694 [2024-07-11 11:11:12.907411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.694 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.694 [2024-07-11 11:11:12.987567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.694 [2024-07-11 11:11:13.090743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.694 [2024-07-11 11:11:13.090838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.694 [2024-07-11 11:11:13.090863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.694 [2024-07-11 11:11:13.090886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.694 [2024-07-11 11:11:13.090906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.694 [2024-07-11 11:11:13.090985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.694 [2024-07-11 11:11:13.091070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.694 [2024-07-11 11:11:13.091145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.694 [2024-07-11 11:11:13.091137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 [2024-07-11 11:11:13.263822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 
11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 Malloc1 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 [2024-07-11 11:11:13.319091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 Malloc2 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.952 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 Malloc3 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 Malloc4 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 Malloc5 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 Malloc6 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.211 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.211 11:11:13 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 Malloc7 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.212 Malloc8 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.212 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.469 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.469 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:59.469 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.469 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 Malloc9 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 Malloc10 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 Malloc11 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
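The provisioning loop that finishes here is the same four-step recipe repeated for i = 1..11 (NVMF_SUBSYS=11): create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode$i with serial number SPDK$i, attach the bdev as its namespace, and add a TCP listener on 10.0.0.2:4420. Written out as direct rpc.py calls (a sketch; the test issues the same RPCs through its rpc_cmd wrapper against the target in the namespace):

    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done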
00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.470 11:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:00.034 11:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:00.034 11:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:00.034 11:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.034 11:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:00.034 11:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:02.556 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:02.556 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:02.556 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:02.556 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:02.557 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:02.557 11:11:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:02.557 11:11:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:02.557 11:11:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:02.814 11:11:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:02.814 11:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.814 11:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.814 11:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.814 11:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.712 
11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.712 11:11:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:05.646 11:11:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:05.646 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.646 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.646 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:05.646 11:11:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.542 11:11:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:08.108 11:11:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:08.108 11:11:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:08.108 11:11:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.108 11:11:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:08.108 11:11:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.631 11:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:10.888 11:11:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:10.888 11:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.888 11:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.888 11:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.888 11:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.784 11:11:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:13.782 11:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:13.782 11:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.782 11:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.782 11:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.782 11:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.749 11:11:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:16.685 11:11:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:16.685 11:11:30 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.685 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.685 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:16.685 11:11:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.594 11:11:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:19.530 11:11:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:19.530 11:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:19.530 11:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.530 11:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:19.530 11:11:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.426 11:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:22.361 11:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:22.361 11:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.361 11:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.361 11:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
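On the host side, each nvme connect is paired with waitforserial, which polls lsblk until a block device whose serial matches SPDK$i shows up; the trace above shows this cycle completing for cnode1 through cnode8 and starting for cnode9. One iteration of the pair, as a sketch (HOSTNQN and HOSTID stand for the uuid-derived values visible in the trace):

    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
        sleep 2      # the real helper gives up after 15 attempts
    done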
00:25:22.361 11:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.263 11:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:25.195 11:11:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:25.195 11:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:25.195 11:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.195 11:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:25.195 11:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.094 11:11:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:27.661 11:11:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:27.661 11:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.661 11:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.661 11:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:27.661 11:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:30.189 11:11:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:30.189 [global] 00:25:30.189 thread=1 00:25:30.189 invalidate=1 00:25:30.189 rw=read 00:25:30.189 time_based=1 00:25:30.189 runtime=10 00:25:30.189 ioengine=libaio 00:25:30.189 direct=1 00:25:30.189 bs=262144 00:25:30.189 iodepth=64 00:25:30.189 norandommap=1 00:25:30.189 numjobs=1 00:25:30.189 00:25:30.189 [job0] 00:25:30.189 filename=/dev/nvme0n1 00:25:30.189 [job1] 00:25:30.189 filename=/dev/nvme10n1 00:25:30.189 [job2] 00:25:30.189 filename=/dev/nvme1n1 00:25:30.189 [job3] 00:25:30.189 filename=/dev/nvme2n1 00:25:30.189 [job4] 00:25:30.189 filename=/dev/nvme3n1 00:25:30.189 [job5] 00:25:30.189 filename=/dev/nvme4n1 00:25:30.189 [job6] 00:25:30.189 filename=/dev/nvme5n1 00:25:30.189 [job7] 00:25:30.189 filename=/dev/nvme6n1 00:25:30.189 [job8] 00:25:30.189 filename=/dev/nvme7n1 00:25:30.189 [job9] 00:25:30.189 filename=/dev/nvme8n1 00:25:30.189 [job10] 00:25:30.189 filename=/dev/nvme9n1 00:25:30.189 Could not set queue depth (nvme0n1) 00:25:30.189 Could not set queue depth (nvme10n1) 00:25:30.189 Could not set queue depth (nvme1n1) 00:25:30.189 Could not set queue depth (nvme2n1) 00:25:30.189 Could not set queue depth (nvme3n1) 00:25:30.189 Could not set queue depth (nvme4n1) 00:25:30.189 Could not set queue depth (nvme5n1) 00:25:30.189 Could not set queue depth (nvme6n1) 00:25:30.189 Could not set queue depth (nvme7n1) 00:25:30.189 Could not set queue depth (nvme8n1) 00:25:30.189 Could not set queue depth (nvme9n1) 00:25:30.189 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:30.190 fio-3.35 00:25:30.190 Starting 11 threads 00:25:42.392 00:25:42.392 job0: 
(groupid=0, jobs=1): err= 0: pid=308936: Thu Jul 11 11:11:54 2024 00:25:42.392 read: IOPS=555, BW=139MiB/s (146MB/s)(1401MiB/10088msec) 00:25:42.392 slat (usec): min=8, max=88812, avg=1223.73, stdev=5513.36 00:25:42.392 clat (msec): min=2, max=268, avg=113.94, stdev=55.59 00:25:42.392 lat (msec): min=2, max=292, avg=115.17, stdev=56.41 00:25:42.392 clat percentiles (msec): 00:25:42.392 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 54], 00:25:42.392 | 30.00th=[ 92], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 136], 00:25:42.392 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 197], 00:25:42.392 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 259], 99.95th=[ 264], 00:25:42.392 | 99.99th=[ 271] 00:25:42.392 bw ( KiB/s): min=93696, max=314880, per=7.70%, avg=141759.15, stdev=60322.02, samples=20 00:25:42.392 iops : min= 366, max= 1230, avg=553.70, stdev=235.66, samples=20 00:25:42.392 lat (msec) : 4=1.21%, 10=1.54%, 20=3.41%, 50=12.92%, 100=14.48% 00:25:42.392 lat (msec) : 250=66.32%, 500=0.12% 00:25:42.392 cpu : usr=0.18%, sys=1.46%, ctx=1030, majf=0, minf=4097 00:25:42.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:42.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.392 issued rwts: total=5602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.392 job1: (groupid=0, jobs=1): err= 0: pid=308937: Thu Jul 11 11:11:54 2024 00:25:42.392 read: IOPS=686, BW=172MiB/s (180MB/s)(1729MiB/10069msec) 00:25:42.392 slat (usec): min=8, max=168326, avg=947.51, stdev=4176.82 00:25:42.392 clat (usec): min=1370, max=285783, avg=92152.22, stdev=48373.64 00:25:42.392 lat (usec): min=1391, max=344807, avg=93099.73, stdev=48920.89 00:25:42.392 clat percentiles (msec): 00:25:42.392 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 42], 00:25:42.392 | 30.00th=[ 65], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 103], 00:25:42.392 | 70.00th=[ 120], 80.00th=[ 136], 90.00th=[ 153], 95.00th=[ 171], 00:25:42.392 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 259], 99.95th=[ 279], 00:25:42.392 | 99.99th=[ 288] 00:25:42.392 bw ( KiB/s): min=101888, max=308224, per=9.53%, avg=175420.65, stdev=64517.03, samples=20 00:25:42.393 iops : min= 398, max= 1204, avg=685.20, stdev=252.02, samples=20 00:25:42.393 lat (msec) : 2=0.13%, 4=0.58%, 10=2.04%, 20=3.80%, 50=17.46% 00:25:42.393 lat (msec) : 100=34.08%, 250=41.74%, 500=0.17% 00:25:42.393 cpu : usr=0.29%, sys=1.71%, ctx=1273, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=6917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job2: (groupid=0, jobs=1): err= 0: pid=308945: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=564, BW=141MiB/s (148MB/s)(1425MiB/10087msec) 00:25:42.393 slat (usec): min=8, max=116361, avg=873.52, stdev=5079.29 00:25:42.393 clat (msec): min=2, max=277, avg=112.35, stdev=48.13 00:25:42.393 lat (msec): min=2, max=290, avg=113.23, stdev=48.78 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 43], 20.00th=[ 71], 00:25:42.393 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 
113], 60.00th=[ 127], 00:25:42.393 | 70.00th=[ 140], 80.00th=[ 153], 90.00th=[ 174], 95.00th=[ 188], 00:25:42.393 | 99.00th=[ 228], 99.50th=[ 239], 99.90th=[ 259], 99.95th=[ 275], 00:25:42.393 | 99.99th=[ 279] 00:25:42.393 bw ( KiB/s): min=81920, max=273920, per=7.83%, avg=144220.65, stdev=47678.83, samples=20 00:25:42.393 iops : min= 320, max= 1070, avg=563.35, stdev=186.23, samples=20 00:25:42.393 lat (msec) : 4=0.04%, 10=1.39%, 20=1.07%, 50=9.78%, 100=26.83% 00:25:42.393 lat (msec) : 250=60.67%, 500=0.23% 00:25:42.393 cpu : usr=0.26%, sys=1.36%, ctx=1100, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=5698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job3: (groupid=0, jobs=1): err= 0: pid=308946: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=766, BW=192MiB/s (201MB/s)(1929MiB/10071msec) 00:25:42.393 slat (usec): min=9, max=94497, avg=1135.26, stdev=4093.04 00:25:42.393 clat (msec): min=2, max=265, avg=82.35, stdev=45.56 00:25:42.393 lat (msec): min=2, max=275, avg=83.48, stdev=46.18 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 16], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 37], 00:25:42.393 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 74], 60.00th=[ 86], 00:25:42.393 | 70.00th=[ 102], 80.00th=[ 126], 90.00th=[ 146], 95.00th=[ 167], 00:25:42.393 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 247], 99.95th=[ 247], 00:25:42.393 | 99.99th=[ 266] 00:25:42.393 bw ( KiB/s): min=78848, max=351744, per=10.64%, avg=195882.80, stdev=74849.34, samples=20 00:25:42.393 iops : min= 308, max= 1374, avg=765.05, stdev=292.41, samples=20 00:25:42.393 lat (msec) : 4=0.16%, 10=0.04%, 20=1.89%, 50=27.34%, 100=39.94% 00:25:42.393 lat (msec) : 250=30.62%, 500=0.03% 00:25:42.393 cpu : usr=0.47%, sys=2.40%, ctx=1217, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=7715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job4: (groupid=0, jobs=1): err= 0: pid=308947: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=696, BW=174MiB/s (183MB/s)(1751MiB/10050msec) 00:25:42.393 slat (usec): min=8, max=90646, avg=984.16, stdev=4352.86 00:25:42.393 clat (usec): min=1233, max=233426, avg=90815.26, stdev=48503.36 00:25:42.393 lat (usec): min=1254, max=258790, avg=91799.42, stdev=49200.49 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 44], 00:25:42.393 | 30.00th=[ 65], 40.00th=[ 78], 50.00th=[ 89], 60.00th=[ 102], 00:25:42.393 | 70.00th=[ 115], 80.00th=[ 134], 90.00th=[ 159], 95.00th=[ 176], 00:25:42.393 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 224], 99.95th=[ 232], 00:25:42.393 | 99.99th=[ 234] 00:25:42.393 bw ( KiB/s): min=93883, max=294400, per=9.65%, avg=177620.65, stdev=61728.36, samples=20 00:25:42.393 iops : min= 366, max= 1150, avg=693.75, stdev=241.20, samples=20 00:25:42.393 lat (msec) : 2=0.21%, 4=0.67%, 10=1.86%, 20=4.93%, 50=14.54% 00:25:42.393 lat (msec) : 100=37.33%, 250=40.46% 
00:25:42.393 cpu : usr=0.26%, sys=1.78%, ctx=1263, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=7002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job5: (groupid=0, jobs=1): err= 0: pid=308948: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=618, BW=155MiB/s (162MB/s)(1560MiB/10086msec) 00:25:42.393 slat (usec): min=8, max=128806, avg=868.48, stdev=5304.56 00:25:42.393 clat (usec): min=1247, max=311607, avg=102497.64, stdev=52415.71 00:25:42.393 lat (usec): min=1328, max=318594, avg=103366.12, stdev=53088.03 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 51], 00:25:42.393 | 30.00th=[ 69], 40.00th=[ 91], 50.00th=[ 109], 60.00th=[ 122], 00:25:42.393 | 70.00th=[ 134], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 184], 00:25:42.393 | 99.00th=[ 222], 99.50th=[ 236], 99.90th=[ 284], 99.95th=[ 284], 00:25:42.393 | 99.99th=[ 313] 00:25:42.393 bw ( KiB/s): min=73216, max=279504, per=8.59%, avg=158111.80, stdev=49208.92, samples=20 00:25:42.393 iops : min= 286, max= 1091, avg=617.55, stdev=192.11, samples=20 00:25:42.393 lat (msec) : 2=0.06%, 4=0.27%, 10=2.21%, 20=3.81%, 50=13.11% 00:25:42.393 lat (msec) : 100=24.92%, 250=55.34%, 500=0.27% 00:25:42.393 cpu : usr=0.31%, sys=1.42%, ctx=1160, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=6241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job6: (groupid=0, jobs=1): err= 0: pid=308949: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=774, BW=194MiB/s (203MB/s)(1952MiB/10088msec) 00:25:42.393 slat (usec): min=8, max=133360, avg=997.65, stdev=4376.98 00:25:42.393 clat (msec): min=2, max=320, avg=81.62, stdev=56.17 00:25:42.393 lat (msec): min=2, max=320, avg=82.62, stdev=56.95 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 10], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 34], 00:25:42.393 | 30.00th=[ 39], 40.00th=[ 48], 50.00th=[ 68], 60.00th=[ 86], 00:25:42.393 | 70.00th=[ 107], 80.00th=[ 127], 90.00th=[ 163], 95.00th=[ 192], 00:25:42.393 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 300], 00:25:42.393 | 99.99th=[ 321] 00:25:42.393 bw ( KiB/s): min=78336, max=431616, per=10.77%, avg=198232.80, stdev=101044.22, samples=20 00:25:42.393 iops : min= 306, max= 1686, avg=774.30, stdev=394.71, samples=20 00:25:42.393 lat (msec) : 4=0.23%, 10=0.93%, 20=4.15%, 50=36.92%, 100=25.20% 00:25:42.393 lat (msec) : 250=31.36%, 500=1.20% 00:25:42.393 cpu : usr=0.35%, sys=2.06%, ctx=1346, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=7809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job7: (groupid=0, 
jobs=1): err= 0: pid=308950: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=751, BW=188MiB/s (197MB/s)(1886MiB/10041msec) 00:25:42.393 slat (usec): min=10, max=117643, avg=1201.44, stdev=4573.88 00:25:42.393 clat (msec): min=4, max=242, avg=83.95, stdev=50.94 00:25:42.393 lat (msec): min=4, max=312, avg=85.15, stdev=51.76 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 34], 00:25:42.393 | 30.00th=[ 43], 40.00th=[ 56], 50.00th=[ 72], 60.00th=[ 97], 00:25:42.393 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 153], 95.00th=[ 182], 00:25:42.393 | 99.00th=[ 215], 99.50th=[ 230], 99.90th=[ 241], 99.95th=[ 241], 00:25:42.393 | 99.99th=[ 243] 00:25:42.393 bw ( KiB/s): min=90112, max=429056, per=10.40%, avg=191435.65, stdev=99858.32, samples=20 00:25:42.393 iops : min= 352, max= 1676, avg=747.70, stdev=390.13, samples=20 00:25:42.393 lat (msec) : 10=0.64%, 20=2.03%, 50=33.12%, 100=25.38%, 250=38.84% 00:25:42.393 cpu : usr=0.44%, sys=2.29%, ctx=1115, majf=0, minf=4097 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=7542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job8: (groupid=0, jobs=1): err= 0: pid=308954: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=630, BW=158MiB/s (165MB/s)(1589MiB/10090msec) 00:25:42.393 slat (usec): min=8, max=82285, avg=1100.54, stdev=4472.10 00:25:42.393 clat (usec): min=782, max=254491, avg=100398.75, stdev=43071.42 00:25:42.393 lat (usec): min=811, max=254510, avg=101499.28, stdev=43658.61 00:25:42.393 clat percentiles (msec): 00:25:42.393 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 40], 20.00th=[ 66], 00:25:42.393 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 115], 00:25:42.393 | 70.00th=[ 128], 80.00th=[ 138], 90.00th=[ 153], 95.00th=[ 165], 00:25:42.393 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 230], 99.95th=[ 241], 00:25:42.393 | 99.99th=[ 255] 00:25:42.393 bw ( KiB/s): min=96768, max=264175, per=8.75%, avg=161085.40, stdev=43040.76, samples=20 00:25:42.393 iops : min= 378, max= 1031, avg=629.15, stdev=168.03, samples=20 00:25:42.393 lat (usec) : 1000=0.03% 00:25:42.393 lat (msec) : 2=0.09%, 4=0.80%, 10=2.09%, 20=2.75%, 50=7.25% 00:25:42.393 lat (msec) : 100=36.61%, 250=50.35%, 500=0.02% 00:25:42.393 cpu : usr=0.29%, sys=1.89%, ctx=1149, majf=0, minf=3722 00:25:42.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:42.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.393 issued rwts: total=6357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.393 job9: (groupid=0, jobs=1): err= 0: pid=308956: Thu Jul 11 11:11:54 2024 00:25:42.393 read: IOPS=514, BW=129MiB/s (135MB/s)(1294MiB/10069msec) 00:25:42.393 slat (usec): min=10, max=108638, avg=1591.70, stdev=5438.07 00:25:42.393 clat (usec): min=1770, max=268903, avg=122802.76, stdev=41500.74 00:25:42.393 lat (usec): min=1826, max=293188, avg=124394.46, stdev=42202.31 00:25:42.393 clat percentiles (msec): 00:25:42.394 | 1.00th=[ 31], 5.00th=[ 62], 10.00th=[ 70], 20.00th=[ 85], 00:25:42.394 | 30.00th=[ 102], 40.00th=[ 116], 
50.00th=[ 126], 60.00th=[ 133], 00:25:42.394 | 70.00th=[ 142], 80.00th=[ 150], 90.00th=[ 176], 95.00th=[ 201], 00:25:42.394 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 255], 99.95th=[ 271], 00:25:42.394 | 99.99th=[ 271] 00:25:42.394 bw ( KiB/s): min=78848, max=218624, per=7.11%, avg=130893.90, stdev=35305.22, samples=20 00:25:42.394 iops : min= 308, max= 854, avg=511.30, stdev=137.91, samples=20 00:25:42.394 lat (msec) : 2=0.02%, 20=0.12%, 50=2.57%, 100=26.46%, 250=70.66% 00:25:42.394 lat (msec) : 500=0.17% 00:25:42.394 cpu : usr=0.32%, sys=1.62%, ctx=983, majf=0, minf=4097 00:25:42.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.394 issued rwts: total=5177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.394 job10: (groupid=0, jobs=1): err= 0: pid=308957: Thu Jul 11 11:11:54 2024 00:25:42.394 read: IOPS=645, BW=161MiB/s (169MB/s)(1627MiB/10087msec) 00:25:42.394 slat (usec): min=8, max=98576, avg=921.10, stdev=4510.91 00:25:42.394 clat (usec): min=1279, max=260089, avg=98185.41, stdev=51108.09 00:25:42.394 lat (usec): min=1295, max=266862, avg=99106.51, stdev=51827.84 00:25:42.394 clat percentiles (msec): 00:25:42.394 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 46], 00:25:42.394 | 30.00th=[ 65], 40.00th=[ 82], 50.00th=[ 101], 60.00th=[ 117], 00:25:42.394 | 70.00th=[ 131], 80.00th=[ 144], 90.00th=[ 163], 95.00th=[ 176], 00:25:42.394 | 99.00th=[ 218], 99.50th=[ 230], 99.90th=[ 245], 99.95th=[ 247], 00:25:42.394 | 99.99th=[ 262] 00:25:42.394 bw ( KiB/s): min=93184, max=351232, per=8.96%, avg=164995.00, stdev=62507.87, samples=20 00:25:42.394 iops : min= 364, max= 1372, avg=644.50, stdev=244.18, samples=20 00:25:42.394 lat (msec) : 2=0.11%, 4=0.43%, 10=2.00%, 20=2.66%, 50=16.78% 00:25:42.394 lat (msec) : 100=28.01%, 250=49.98%, 500=0.05% 00:25:42.394 cpu : usr=0.27%, sys=1.46%, ctx=1231, majf=0, minf=4097 00:25:42.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:42.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:42.394 issued rwts: total=6509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:42.394 00:25:42.394 Run status group 0 (all jobs): 00:25:42.394 READ: bw=1798MiB/s (1885MB/s), 129MiB/s-194MiB/s (135MB/s-203MB/s), io=17.7GiB (19.0GB), run=10041-10090msec 00:25:42.394 00:25:42.394 Disk stats (read/write): 00:25:42.394 nvme0n1: ios=11076/0, merge=0/0, ticks=1243274/0, in_queue=1243274, util=97.14% 00:25:42.394 nvme10n1: ios=13575/0, merge=0/0, ticks=1243665/0, in_queue=1243665, util=97.36% 00:25:42.394 nvme1n1: ios=11206/0, merge=0/0, ticks=1240762/0, in_queue=1240762, util=97.64% 00:25:42.394 nvme2n1: ios=15192/0, merge=0/0, ticks=1240447/0, in_queue=1240447, util=97.78% 00:25:42.394 nvme3n1: ios=13735/0, merge=0/0, ticks=1243682/0, in_queue=1243682, util=97.87% 00:25:42.394 nvme4n1: ios=12264/0, merge=0/0, ticks=1241422/0, in_queue=1241422, util=98.21% 00:25:42.394 nvme5n1: ios=15468/0, merge=0/0, ticks=1232491/0, in_queue=1232491, util=98.37% 00:25:42.394 nvme6n1: ios=14791/0, merge=0/0, ticks=1240815/0, in_queue=1240815, util=98.47% 00:25:42.394 nvme7n1: ios=12522/0, merge=0/0, 
ticks=1235913/0, in_queue=1235913, util=98.88% 00:25:42.394 nvme8n1: ios=10109/0, merge=0/0, ticks=1237786/0, in_queue=1237786, util=99.07% 00:25:42.394 nvme9n1: ios=12750/0, merge=0/0, ticks=1240556/0, in_queue=1240556, util=99.22% 00:25:42.394 11:11:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:42.394 [global] 00:25:42.394 thread=1 00:25:42.394 invalidate=1 00:25:42.394 rw=randwrite 00:25:42.394 time_based=1 00:25:42.394 runtime=10 00:25:42.394 ioengine=libaio 00:25:42.394 direct=1 00:25:42.394 bs=262144 00:25:42.394 iodepth=64 00:25:42.394 norandommap=1 00:25:42.394 numjobs=1 00:25:42.394 00:25:42.394 [job0] 00:25:42.394 filename=/dev/nvme0n1 00:25:42.394 [job1] 00:25:42.394 filename=/dev/nvme10n1 00:25:42.394 [job2] 00:25:42.394 filename=/dev/nvme1n1 00:25:42.394 [job3] 00:25:42.394 filename=/dev/nvme2n1 00:25:42.394 [job4] 00:25:42.394 filename=/dev/nvme3n1 00:25:42.394 [job5] 00:25:42.394 filename=/dev/nvme4n1 00:25:42.394 [job6] 00:25:42.394 filename=/dev/nvme5n1 00:25:42.394 [job7] 00:25:42.394 filename=/dev/nvme6n1 00:25:42.394 [job8] 00:25:42.394 filename=/dev/nvme7n1 00:25:42.394 [job9] 00:25:42.394 filename=/dev/nvme8n1 00:25:42.394 [job10] 00:25:42.394 filename=/dev/nvme9n1 00:25:42.394 Could not set queue depth (nvme0n1) 00:25:42.394 Could not set queue depth (nvme10n1) 00:25:42.394 Could not set queue depth (nvme1n1) 00:25:42.394 Could not set queue depth (nvme2n1) 00:25:42.394 Could not set queue depth (nvme3n1) 00:25:42.394 Could not set queue depth (nvme4n1) 00:25:42.394 Could not set queue depth (nvme5n1) 00:25:42.394 Could not set queue depth (nvme6n1) 00:25:42.394 Could not set queue depth (nvme7n1) 00:25:42.394 Could not set queue depth (nvme8n1) 00:25:42.394 Could not set queue depth (nvme9n1) 00:25:42.394 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:42.394 fio-3.35 00:25:42.394 Starting 11 threads 00:25:52.369 00:25:52.369 job0: (groupid=0, jobs=1): err= 0: pid=309970: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=415, BW=104MiB/s (109MB/s)(1065MiB/10263msec); 0 zone resets 00:25:52.369 slat 
(usec): min=18, max=142335, avg=1775.19, stdev=5471.62 00:25:52.369 clat (usec): min=1175, max=559852, avg=152312.82, stdev=103659.55 00:25:52.369 lat (usec): min=1790, max=559915, avg=154088.01, stdev=104792.82 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 43], 00:25:52.369 | 30.00th=[ 60], 40.00th=[ 113], 50.00th=[ 159], 60.00th=[ 182], 00:25:52.369 | 70.00th=[ 226], 80.00th=[ 251], 90.00th=[ 279], 95.00th=[ 305], 00:25:52.369 | 99.00th=[ 409], 99.50th=[ 489], 99.90th=[ 542], 99.95th=[ 542], 00:25:52.369 | 99.99th=[ 558] 00:25:52.369 bw ( KiB/s): min=55296, max=359217, per=7.84%, avg=107407.25, stdev=66331.73, samples=20 00:25:52.369 iops : min= 216, max= 1403, avg=419.55, stdev=259.07, samples=20 00:25:52.369 lat (msec) : 2=0.05%, 4=0.59%, 10=2.70%, 20=4.41%, 50=20.54% 00:25:52.369 lat (msec) : 100=10.59%, 250=40.87%, 500=19.84%, 750=0.42% 00:25:52.369 cpu : usr=1.14%, sys=1.14%, ctx=2143, majf=0, minf=1 00:25:52.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.369 issued rwts: total=0,4260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.369 job1: (groupid=0, jobs=1): err= 0: pid=309982: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=585, BW=146MiB/s (153MB/s)(1472MiB/10066msec); 0 zone resets 00:25:52.369 slat (usec): min=19, max=134126, avg=1112.28, stdev=3881.37 00:25:52.369 clat (usec): min=1064, max=391236, avg=108241.47, stdev=77281.56 00:25:52.369 lat (usec): min=1130, max=391264, avg=109353.75, stdev=78148.42 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 48], 00:25:52.369 | 30.00th=[ 51], 40.00th=[ 65], 50.00th=[ 91], 60.00th=[ 115], 00:25:52.369 | 70.00th=[ 132], 80.00th=[ 178], 90.00th=[ 234], 95.00th=[ 249], 00:25:52.369 | 99.00th=[ 309], 99.50th=[ 347], 99.90th=[ 380], 99.95th=[ 384], 00:25:52.369 | 99.99th=[ 393] 00:25:52.369 bw ( KiB/s): min=61440, max=320000, per=10.89%, avg=149137.85, stdev=71155.42, samples=20 00:25:52.369 iops : min= 240, max= 1250, avg=582.55, stdev=277.97, samples=20 00:25:52.369 lat (msec) : 2=0.19%, 4=0.53%, 10=2.28%, 20=3.46%, 50=19.41% 00:25:52.369 lat (msec) : 100=29.63%, 250=39.80%, 500=4.70% 00:25:52.369 cpu : usr=1.56%, sys=2.07%, ctx=3393, majf=0, minf=1 00:25:52.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.369 issued rwts: total=0,5889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.369 job2: (groupid=0, jobs=1): err= 0: pid=309983: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=493, BW=123MiB/s (129MB/s)(1242MiB/10056msec); 0 zone resets 00:25:52.369 slat (usec): min=15, max=130486, avg=1312.39, stdev=4795.16 00:25:52.369 clat (usec): min=693, max=464791, avg=128171.84, stdev=89261.27 00:25:52.369 lat (usec): min=726, max=464853, avg=129484.23, stdev=90339.15 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 15], 20.00th=[ 40], 00:25:52.369 | 30.00th=[ 72], 40.00th=[ 108], 50.00th=[ 122], 60.00th=[ 136], 00:25:52.369 | 70.00th=[ 161], 
80.00th=[ 209], 90.00th=[ 255], 95.00th=[ 279], 00:25:52.369 | 99.00th=[ 372], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 435], 00:25:52.369 | 99.99th=[ 464] 00:25:52.369 bw ( KiB/s): min=61440, max=238080, per=9.17%, avg=125537.90, stdev=50621.95, samples=20 00:25:52.369 iops : min= 240, max= 930, avg=490.35, stdev=197.78, samples=20 00:25:52.369 lat (usec) : 750=0.08%, 1000=0.08% 00:25:52.369 lat (msec) : 2=0.64%, 4=2.52%, 10=3.74%, 20=6.10%, 50=10.97% 00:25:52.369 lat (msec) : 100=12.86%, 250=52.31%, 500=10.69% 00:25:52.369 cpu : usr=1.55%, sys=1.51%, ctx=3136, majf=0, minf=1 00:25:52.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.369 issued rwts: total=0,4967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.369 job3: (groupid=0, jobs=1): err= 0: pid=309984: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=536, BW=134MiB/s (141MB/s)(1352MiB/10078msec); 0 zone resets 00:25:52.369 slat (usec): min=17, max=144477, avg=1138.92, stdev=5472.25 00:25:52.369 clat (usec): min=1381, max=377302, avg=118075.34, stdev=75575.02 00:25:52.369 lat (usec): min=1448, max=381806, avg=119214.26, stdev=76279.43 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 47], 00:25:52.369 | 30.00th=[ 68], 40.00th=[ 91], 50.00th=[ 114], 60.00th=[ 134], 00:25:52.369 | 70.00th=[ 155], 80.00th=[ 180], 90.00th=[ 232], 95.00th=[ 264], 00:25:52.369 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 372], 00:25:52.369 | 99.99th=[ 376] 00:25:52.369 bw ( KiB/s): min=63488, max=254464, per=9.99%, avg=136816.40, stdev=49347.95, samples=20 00:25:52.369 iops : min= 248, max= 994, avg=534.40, stdev=192.80, samples=20 00:25:52.369 lat (msec) : 2=0.11%, 4=1.07%, 10=4.09%, 20=4.27%, 50=11.86% 00:25:52.369 lat (msec) : 100=24.39%, 250=47.03%, 500=7.18% 00:25:52.369 cpu : usr=1.56%, sys=1.60%, ctx=3358, majf=0, minf=1 00:25:52.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.369 issued rwts: total=0,5407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.369 job4: (groupid=0, jobs=1): err= 0: pid=309985: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=356, BW=89.1MiB/s (93.4MB/s)(915MiB/10270msec); 0 zone resets 00:25:52.369 slat (usec): min=24, max=62601, avg=2349.19, stdev=5545.22 00:25:52.369 clat (usec): min=1596, max=595885, avg=177182.44, stdev=92230.46 00:25:52.369 lat (usec): min=1627, max=595946, avg=179531.63, stdev=93491.67 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 11], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 73], 00:25:52.369 | 30.00th=[ 128], 40.00th=[ 155], 50.00th=[ 174], 60.00th=[ 211], 00:25:52.369 | 70.00th=[ 245], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 300], 00:25:52.369 | 99.00th=[ 401], 99.50th=[ 468], 99.90th=[ 567], 99.95th=[ 600], 00:25:52.369 | 99.99th=[ 600] 00:25:52.369 bw ( KiB/s): min=51200, max=266773, per=6.72%, avg=92084.25, stdev=50976.73, samples=20 00:25:52.369 iops : min= 200, max= 1042, avg=359.70, stdev=199.11, samples=20 00:25:52.369 lat (msec) : 2=0.03%, 4=0.14%, 10=0.66%, 
20=1.12%, 50=4.10% 00:25:52.369 lat (msec) : 100=19.54%, 250=47.50%, 500=26.43%, 750=0.49% 00:25:52.369 cpu : usr=1.06%, sys=1.04%, ctx=1491, majf=0, minf=1 00:25:52.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.369 issued rwts: total=0,3659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.369 job5: (groupid=0, jobs=1): err= 0: pid=309986: Thu Jul 11 11:12:05 2024 00:25:52.369 write: IOPS=428, BW=107MiB/s (112MB/s)(1100MiB/10257msec); 0 zone resets 00:25:52.369 slat (usec): min=16, max=236583, avg=1278.81, stdev=6696.02 00:25:52.369 clat (usec): min=946, max=519469, avg=147621.00, stdev=102440.19 00:25:52.369 lat (usec): min=1023, max=519570, avg=148899.81, stdev=103406.76 00:25:52.369 clat percentiles (msec): 00:25:52.369 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 17], 20.00th=[ 33], 00:25:52.369 | 30.00th=[ 64], 40.00th=[ 115], 50.00th=[ 157], 60.00th=[ 178], 00:25:52.369 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 279], 95.00th=[ 305], 00:25:52.369 | 99.00th=[ 405], 99.50th=[ 447], 99.90th=[ 510], 99.95th=[ 518], 00:25:52.369 | 99.99th=[ 518] 00:25:52.369 bw ( KiB/s): min=56832, max=240640, per=8.10%, avg=110987.40, stdev=39602.03, samples=20 00:25:52.369 iops : min= 222, max= 940, avg=433.50, stdev=154.66, samples=20 00:25:52.369 lat (usec) : 1000=0.02% 00:25:52.369 lat (msec) : 2=0.18%, 4=0.64%, 10=4.86%, 20=6.64%, 50=14.44% 00:25:52.369 lat (msec) : 100=10.46%, 250=46.40%, 500=16.21%, 750=0.16% 00:25:52.370 cpu : usr=1.22%, sys=1.36%, ctx=3101, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,4399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 job6: (groupid=0, jobs=1): err= 0: pid=309988: Thu Jul 11 11:12:05 2024 00:25:52.370 write: IOPS=590, BW=148MiB/s (155MB/s)(1515MiB/10264msec); 0 zone resets 00:25:52.370 slat (usec): min=14, max=121318, avg=900.21, stdev=3620.54 00:25:52.370 clat (usec): min=690, max=495319, avg=107401.54, stdev=78714.00 00:25:52.370 lat (usec): min=726, max=495384, avg=108301.75, stdev=79226.64 00:25:52.370 clat percentiles (msec): 00:25:52.370 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 37], 00:25:52.370 | 30.00th=[ 63], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 113], 00:25:52.370 | 70.00th=[ 131], 80.00th=[ 155], 90.00th=[ 207], 95.00th=[ 257], 00:25:52.370 | 99.00th=[ 376], 99.50th=[ 456], 99.90th=[ 485], 99.95th=[ 489], 00:25:52.370 | 99.99th=[ 498] 00:25:52.370 bw ( KiB/s): min=85504, max=291840, per=11.21%, avg=153538.35, stdev=49200.29, samples=20 00:25:52.370 iops : min= 334, max= 1140, avg=599.75, stdev=192.20, samples=20 00:25:52.370 lat (usec) : 750=0.02%, 1000=0.21% 00:25:52.370 lat (msec) : 2=0.54%, 4=0.71%, 10=3.27%, 20=6.80%, 50=14.96% 00:25:52.370 lat (msec) : 100=26.07%, 250=42.15%, 500=5.26% 00:25:52.370 cpu : usr=1.75%, sys=1.79%, ctx=4015, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,6061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 job7: (groupid=0, jobs=1): err= 0: pid=309989: Thu Jul 11 11:12:05 2024 00:25:52.370 write: IOPS=508, BW=127MiB/s (133MB/s)(1305MiB/10262msec); 0 zone resets 00:25:52.370 slat (usec): min=16, max=38172, avg=1348.38, stdev=3508.59 00:25:52.370 clat (usec): min=896, max=459344, avg=124341.93, stdev=70697.39 00:25:52.370 lat (usec): min=938, max=459386, avg=125690.31, stdev=71296.49 00:25:52.370 clat percentiles (msec): 00:25:52.370 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 63], 00:25:52.370 | 30.00th=[ 90], 40.00th=[ 107], 50.00th=[ 122], 60.00th=[ 138], 00:25:52.370 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 207], 95.00th=[ 243], 00:25:52.370 | 99.00th=[ 393], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:25:52.370 | 99.99th=[ 460] 00:25:52.370 bw ( KiB/s): min=75624, max=187392, per=9.64%, avg=132037.20, stdev=32836.80, samples=20 00:25:52.370 iops : min= 295, max= 732, avg=515.75, stdev=128.31, samples=20 00:25:52.370 lat (usec) : 1000=0.11% 00:25:52.370 lat (msec) : 2=0.19%, 4=1.00%, 10=2.20%, 20=3.60%, 50=10.02% 00:25:52.370 lat (msec) : 100=19.40%, 250=60.18%, 500=3.29% 00:25:52.370 cpu : usr=1.47%, sys=1.61%, ctx=2967, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,5221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 job8: (groupid=0, jobs=1): err= 0: pid=309992: Thu Jul 11 11:12:05 2024 00:25:52.370 write: IOPS=507, BW=127MiB/s (133MB/s)(1302MiB/10271msec); 0 zone resets 00:25:52.370 slat (usec): min=20, max=169702, avg=1075.05, stdev=5060.12 00:25:52.370 clat (usec): min=974, max=544180, avg=125023.72, stdev=92026.83 00:25:52.370 lat (usec): min=1009, max=544235, avg=126098.77, stdev=92831.06 00:25:52.370 clat percentiles (msec): 00:25:52.370 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 45], 00:25:52.370 | 30.00th=[ 65], 40.00th=[ 83], 50.00th=[ 104], 60.00th=[ 138], 00:25:52.370 | 70.00th=[ 159], 80.00th=[ 186], 90.00th=[ 266], 95.00th=[ 296], 00:25:52.370 | 99.00th=[ 443], 99.50th=[ 468], 99.90th=[ 531], 99.95th=[ 535], 00:25:52.370 | 99.99th=[ 542] 00:25:52.370 bw ( KiB/s): min=50688, max=228864, per=9.62%, avg=131696.85, stdev=52002.66, samples=20 00:25:52.370 iops : min= 198, max= 894, avg=514.40, stdev=203.16, samples=20 00:25:52.370 lat (usec) : 1000=0.04% 00:25:52.370 lat (msec) : 2=0.27%, 4=0.61%, 10=2.57%, 20=5.05%, 50=13.75% 00:25:52.370 lat (msec) : 100=26.86%, 250=38.90%, 500=11.67%, 750=0.27% 00:25:52.370 cpu : usr=1.50%, sys=1.75%, ctx=3503, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,5208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 job9: (groupid=0, jobs=1): err= 0: pid=309993: Thu Jul 11 11:12:05 2024 00:25:52.370 write: IOPS=521, BW=130MiB/s 
(137MB/s)(1340MiB/10275msec); 0 zone resets 00:25:52.370 slat (usec): min=19, max=151080, avg=1156.67, stdev=4052.74 00:25:52.370 clat (usec): min=896, max=538691, avg=121433.65, stdev=88205.09 00:25:52.370 lat (usec): min=936, max=538756, avg=122590.32, stdev=89056.75 00:25:52.370 clat percentiles (msec): 00:25:52.370 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 18], 20.00th=[ 37], 00:25:52.370 | 30.00th=[ 68], 40.00th=[ 91], 50.00th=[ 107], 60.00th=[ 122], 00:25:52.370 | 70.00th=[ 148], 80.00th=[ 203], 90.00th=[ 249], 95.00th=[ 279], 00:25:52.370 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 535], 00:25:52.370 | 99.99th=[ 542] 00:25:52.370 bw ( KiB/s): min=62464, max=230912, per=9.90%, avg=135612.50, stdev=44785.48, samples=20 00:25:52.370 iops : min= 244, max= 902, avg=529.70, stdev=174.98, samples=20 00:25:52.370 lat (usec) : 1000=0.04% 00:25:52.370 lat (msec) : 2=0.15%, 4=0.37%, 10=2.82%, 20=9.07%, 50=12.11% 00:25:52.370 lat (msec) : 100=22.78%, 250=43.14%, 500=9.46%, 750=0.07% 00:25:52.370 cpu : usr=1.52%, sys=1.50%, ctx=3427, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,5361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 job10: (groupid=0, jobs=1): err= 0: pid=309994: Thu Jul 11 11:12:05 2024 00:25:52.370 write: IOPS=442, BW=111MiB/s (116MB/s)(1136MiB/10271msec); 0 zone resets 00:25:52.370 slat (usec): min=18, max=110319, avg=1335.65, stdev=5415.64 00:25:52.370 clat (usec): min=954, max=593227, avg=143082.57, stdev=105621.28 00:25:52.370 lat (usec): min=1001, max=593312, avg=144418.22, stdev=106664.35 00:25:52.370 clat percentiles (msec): 00:25:52.370 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 41], 00:25:52.370 | 30.00th=[ 51], 40.00th=[ 103], 50.00th=[ 146], 60.00th=[ 163], 00:25:52.370 | 70.00th=[ 184], 80.00th=[ 241], 90.00th=[ 296], 95.00th=[ 321], 00:25:52.370 | 99.00th=[ 388], 99.50th=[ 477], 99.90th=[ 584], 99.95th=[ 584], 00:25:52.370 | 99.99th=[ 592] 00:25:52.370 bw ( KiB/s): min=53248, max=315392, per=8.37%, avg=114647.35, stdev=56661.59, samples=20 00:25:52.370 iops : min= 208, max= 1232, avg=447.80, stdev=221.34, samples=20 00:25:52.370 lat (usec) : 1000=0.04% 00:25:52.370 lat (msec) : 2=0.37%, 4=1.54%, 10=5.22%, 20=5.09%, 50=17.68% 00:25:52.370 lat (msec) : 100=9.36%, 250=41.59%, 500=18.63%, 750=0.48% 00:25:52.370 cpu : usr=1.27%, sys=1.70%, ctx=3053, majf=0, minf=1 00:25:52.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:52.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:52.370 issued rwts: total=0,4542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:52.370 00:25:52.370 Run status group 0 (all jobs): 00:25:52.370 WRITE: bw=1338MiB/s (1403MB/s), 89.1MiB/s-148MiB/s (93.4MB/s-155MB/s), io=13.4GiB (14.4GB), run=10056-10275msec 00:25:52.370 00:25:52.370 Disk stats (read/write): 00:25:52.370 nvme0n1: ios=49/8464, merge=0/0, ticks=38/1235799, in_queue=1235837, util=97.18% 00:25:52.370 nvme10n1: ios=38/11471, merge=0/0, ticks=30/1221656, in_queue=1221686, util=97.26% 00:25:52.370 nvme1n1: ios=42/9622, merge=0/0, 
ticks=912/1225944, in_queue=1226856, util=99.93% 00:25:52.370 nvme2n1: ios=51/10577, merge=0/0, ticks=3185/1188774, in_queue=1191959, util=99.93% 00:25:52.370 nvme3n1: ios=0/7262, merge=0/0, ticks=0/1231190, in_queue=1231190, util=97.76% 00:25:52.370 nvme4n1: ios=46/8751, merge=0/0, ticks=4793/1200010, in_queue=1204803, util=100.00% 00:25:52.370 nvme5n1: ios=0/12065, merge=0/0, ticks=0/1246405, in_queue=1246405, util=98.19% 00:25:52.370 nvme6n1: ios=41/10387, merge=0/0, ticks=695/1235252, in_queue=1235947, util=99.91% 00:25:52.370 nvme7n1: ios=29/10360, merge=0/0, ticks=3189/1226647, in_queue=1229836, util=100.00% 00:25:52.370 nvme8n1: ios=0/10656, merge=0/0, ticks=0/1241033, in_queue=1241033, util=98.95% 00:25:52.370 nvme9n1: ios=26/9027, merge=0/0, ticks=773/1240607, in_queue=1241380, util=100.00% 00:25:52.370 11:12:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:52.370 11:12:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:52.370 11:12:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.370 11:12:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:52.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.370 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:52.371 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:52.371 
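The same four-step teardown now repeats for each remaining subsystem through cnode11. Reconstructed from the xtrace references above (multiconnection.sh lines 37-40 and autotest_common.sh lines 1219-1231), the pattern is roughly the following sketch — not the verbatim harness code: the retry bound and poll interval inside waitforserial_disconnect are assumptions, and rpc_cmd is the harness wrapper around the SPDK RPC client.

# Polls lsblk until no block device reports the given serial (e.g. SPDK1).
# The two grep probes match the trace; the loop bound and sleep are assumed.
waitforserial_disconnect() {
    local serial="$1" i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        ((i++ >= 15)) && return 1    # assumed retry bound
        sleep 1                      # assumed poll interval
    done
    return 0
}

# Teardown loop as traced (NVMF_SUBSYS=11 in this run): disconnect the
# initiator side, wait until the namespace's serial disappears from lsblk,
# then delete the subsystem on the target via RPC.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    waitforserial_disconnect "SPDK${i}"
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done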
11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.371 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:52.630 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.630 11:12:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:52.888 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:52.888 11:12:07 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.888 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:53.148 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.148 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:53.407 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.407 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:53.665 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.665 11:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:53.665 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.665 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:53.924 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.924 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:54.182 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:54.182 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:54.182 11:12:08 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.182 rmmod nvme_tcp 00:25:54.182 rmmod nvme_fabrics 00:25:54.182 rmmod nvme_keyring 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.182 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 304681 ']' 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 304681 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 304681 ']' 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 304681 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.183 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 304681 00:25:54.441 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:54.441 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:54.441 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 304681' 00:25:54.441 killing process with pid 304681 00:25:54.441 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 304681 00:25:54.441 11:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 304681 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.698 11:12:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.698 11:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.229 11:12:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.229 00:25:57.229 real 1m0.669s 00:25:57.229 user 3m21.342s 00:25:57.229 sys 0m24.310s 00:25:57.229 11:12:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.229 11:12:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.229 ************************************ 00:25:57.229 END TEST nvmf_multiconnection 00:25:57.229 ************************************ 00:25:57.229 11:12:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:57.229 11:12:11 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:57.229 11:12:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.229 11:12:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.229 11:12:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:57.229 ************************************ 00:25:57.229 START TEST nvmf_initiator_timeout 00:25:57.229 ************************************ 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:57.229 * Looking for test storage... 00:25:57.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 
-- # NVME_CONNECT='nvme connect' 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 
-- # build_nvmf_app_args 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:57.229 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.230 11:12:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.132 11:12:13 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:59.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:59.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.132 
11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.132 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:59.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:59.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.133 
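gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device ID (0x8086:0x159b is an Intel E810-family port driven by ice) and then resolves each PCI function to its kernel netdev through sysfs, which is how the two cvl_0_* interfaces are found. A minimal sketch of that lookup for one of the addresses in the log:

    pci=0000:0a:00.0
    cat /sys/bus/pci/devices/$pci/vendor    # 0x8086 -> Intel
    cat /sys/bus/pci/devices/$pci/device    # 0x159b -> e810 bucket
    ls /sys/bus/pci/devices/$pci/net/       # netdev name the test will use: cvl_0_0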
11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:25:59.133 00:25:59.133 --- 10.0.0.2 ping statistics --- 00:25:59.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.133 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:59.133 00:25:59.133 --- 10.0.0.1 ping statistics --- 00:25:59.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.133 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=313964 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 313964 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 313964 ']' 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.133 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 [2024-07-11 11:12:13.576087] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
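Everything from nvmf_tcp_init through the two pings fakes a two-host NVMe/TCP topology on a single machine: the target-side port is moved into its own network namespace while the initiator-side port stays in the root namespace, so traffic genuinely crosses between the two physical ports. Collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                     # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns

The target itself then runs inside the namespace, which is why nvmfappstart becomes ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF.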
00:25:59.391 [2024-07-11 11:12:13.576187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.391 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.391 [2024-07-11 11:12:13.646490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.391 [2024-07-11 11:12:13.732314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.391 [2024-07-11 11:12:13.732368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.391 [2024-07-11 11:12:13.732393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.391 [2024-07-11 11:12:13.732405] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.391 [2024-07-11 11:12:13.732414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.391 [2024-07-11 11:12:13.732557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.391 [2024-07-11 11:12:13.732621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.391 [2024-07-11 11:12:13.732650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.391 [2024-07-11 11:12:13.732651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 Malloc0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 Delay0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.651 11:12:13 
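The backing stack built here is a 64 MiB, 512-byte-block malloc bdev wrapped in a delay bdev whose four latency knobs all start at 30 microseconds; it is that Delay0 device, not the raw RAM disk, that gets exported. rpc_cmd is the harness wrapper around scripts/rpc.py, so the equivalent plain invocations (default /var/tmp/spdk.sock socket assumed) are:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # -r/-t: avg and p99 read latency, -w/-n: avg and p99 write latency, in microseconds
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # TCP transport, with the harness's extra flags (-o, -u 8192) carried over verbatim
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192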
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 [2024-07-11 11:12:13.919791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 [2024-07-11 11:12:13.948049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.651 11:12:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:00.217 11:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:00.217 11:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:00.217 11:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.217 11:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:00.217 11:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:02.754 11:12:16 
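Subsystem wiring plus the host-side attach is three RPCs and one nvme-cli call; the serial SPDKISFASTANDAWESOME is what waitforserial greps for in lsblk to know the namespace has landed. Equivalent commands (the polling loop below paraphrases waitforserial, it is not its literal code):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side, root namespace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done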
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=314378 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:02.754 11:12:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:02.754 [global] 00:26:02.754 thread=1 00:26:02.754 invalidate=1 00:26:02.754 rw=write 00:26:02.754 time_based=1 00:26:02.754 runtime=60 00:26:02.754 ioengine=libaio 00:26:02.754 direct=1 00:26:02.754 bs=4096 00:26:02.754 iodepth=1 00:26:02.754 norandommap=0 00:26:02.754 numjobs=1 00:26:02.754 00:26:02.754 verify_dump=1 00:26:02.754 verify_backlog=512 00:26:02.754 verify_state_save=0 00:26:02.754 do_verify=1 00:26:02.754 verify=crc32c-intel 00:26:02.754 [job0] 00:26:02.754 filename=/dev/nvme0n1 00:26:02.754 Could not set queue depth (nvme0n1) 00:26:02.754 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:02.754 fio-3.35 00:26:02.754 Starting 1 thread 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 true 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 true 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 true 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:05.290 true 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.290 11:12:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 true 00:26:08.586 11:12:22 
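This is the heart of the initiator_timeout test. With the verifying fio write job running against /dev/nvme0n1, the delay bdev's latencies are raised from 30 microseconds to about 31 seconds (31,000,000 us; p99_write goes to 310,000,000 as shown), past the kernel initiator's default 30 second I/O timeout, so inflight commands stall and the timeout/recovery path is exercised; the latencies are then dropped back to 30 us and fio must still finish its verify pass with status 0. As RPCs:

    # stall I/O: ~31 s per command, beyond the initiator's 30 s timeout
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # restore the 30 us baseline so fio can complete
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done

The multi-second tail in the fio completion-latency percentiles further down is exactly those stalled commands finishing after recovery.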
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 true 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 true 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.586 true 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:08.586 11:12:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 314378 00:27:04.817 00:27:04.817 job0: (groupid=0, jobs=1): err= 0: pid=314453: Thu Jul 11 11:13:17 2024 00:27:04.817 read: IOPS=204, BW=819KiB/s (839kB/s)(48.0MiB/60001msec) 00:27:04.817 slat (usec): min=4, max=2102, avg=12.58, stdev=20.04 00:27:04.817 clat (usec): min=197, max=41007k, avg=4643.87, stdev=369974.65 00:27:04.817 lat (usec): min=204, max=41007k, avg=4656.45, stdev=369974.90 00:27:04.817 clat percentiles (usec): 00:27:04.817 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:27:04.817 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:27:04.817 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 388], 00:27:04.817 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:04.817 | 99.99th=[42206] 00:27:04.817 write: IOPS=208, BW=834KiB/s (854kB/s)(48.9MiB/60001msec); 0 zone resets 00:27:04.817 slat (nsec): min=5848, max=68738, avg=13969.26, stdev=7039.85 00:27:04.817 clat (usec): min=157, max=2877, avg=200.81, stdev=41.21 00:27:04.817 lat (usec): min=164, max=2886, avg=214.78, stdev=45.19 00:27:04.817 clat percentiles (usec): 00:27:04.817 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:27:04.817 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:27:04.817 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 255], 00:27:04.817 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 433], 00:27:04.817 | 99.99th=[ 742] 00:27:04.817 bw ( KiB/s): min= 1024, max= 8840, per=100.00%, avg=6126.93, stdev=2455.97, samples=15 00:27:04.817 iops : min= 256, max= 2210, avg=1531.73, stdev=613.99, samples=15 00:27:04.817 lat (usec) : 250=74.93%, 500=23.40%, 750=0.38%, 1000=0.03% 00:27:04.817 lat (msec) : 2=0.01%, 4=0.01%, 50=1.25%, >=2000=0.01% 00:27:04.817 cpu : usr=0.36%, sys=0.68%, ctx=24806, majf=0, minf=2 
00:27:04.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:04.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.817 issued rwts: total=12288,12515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:04.817 00:27:04.817 Run status group 0 (all jobs): 00:27:04.817 READ: bw=819KiB/s (839kB/s), 819KiB/s-819KiB/s (839kB/s-839kB/s), io=48.0MiB (50.3MB), run=60001-60001msec 00:27:04.817 WRITE: bw=834KiB/s (854kB/s), 834KiB/s-834KiB/s (854kB/s-854kB/s), io=48.9MiB (51.3MB), run=60001-60001msec 00:27:04.817 00:27:04.817 Disk stats (read/write): 00:27:04.817 nvme0n1: ios=12146/12288, merge=0/0, ticks=16526/2356, in_queue=18882, util=99.90% 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:04.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:04.817 nvmf hotplug test: fio successful as expected 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- 
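Teardown mirrors setup: disconnect on the host, wait for the block device to actually disappear, then delete the subsystem on the target. Equivalent commands (the wait loop again paraphrases waitforserial_disconnect):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1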
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.817 rmmod nvme_tcp 00:27:04.817 rmmod nvme_fabrics 00:27:04.817 rmmod nvme_keyring 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 313964 ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 313964 ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 313964' 00:27:04.817 killing process with pid 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 313964 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.817 11:13:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.387 11:13:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.387 00:27:05.387 real 1m8.331s 00:27:05.387 user 4m9.980s 00:27:05.387 sys 0m7.881s 00:27:05.387 11:13:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:05.387 11:13:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.387 ************************************ 00:27:05.387 END TEST nvmf_initiator_timeout 00:27:05.387 ************************************ 00:27:05.387 11:13:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:05.387 11:13:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:05.387 11:13:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:05.387 11:13:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:05.387 11:13:19 
nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.387 11:13:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:07.291 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:07.291 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:07.291 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:07.291 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:07.291 11:13:21 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.291 11:13:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.291 11:13:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.291 11:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.549 ************************************ 00:27:07.549 START TEST nvmf_perf_adq 00:27:07.549 ************************************ 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:07.549 * Looking for test storage... 
00:27:07.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.549 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.550 11:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:09.452 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:09.453 11:13:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:10.016 11:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:13.324 11:13:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:18.600 11:13:32 
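
adq_reload_driver (perf_adq.sh@53-@55, traced above and again before the second run) resets the ice driver so no queue or traffic-class state leaks between test phases; the sleep gives the ports time to re-register before nvmftestinit touches them. Reduced to a sketch, assuming the module is not held by any other interface:

# Unload and reload the E810 driver (as root) for a clean ADQ slate,
# then let the ports settle before reconfiguring them.
rmmod ice
modprobe ice
sleep 5
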
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.600 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.601 11:13:32 
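
nvmf_tcp_init (@229-@264 above) builds the test topology from the two E810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic actually leaves one port and enters the other instead of short-circuiting over loopback. The same setup, condensed from the trace (interface and namespace names as used on this rig):

# Separate target and initiator into different network stacks.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (TCP port 4420) on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
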
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:27:18.601 00:27:18.601 --- 10.0.0.2 ping statistics --- 00:27:18.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.601 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:18.601 00:27:18.601 --- 10.0.0.1 ping statistics --- 00:27:18.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.601 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=326136 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 326136 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 326136 ']' 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.601 [2024-07-11 11:13:32.677138] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
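
With connectivity verified in both directions, nvmfappstart launches the target inside the namespace with --wait-for-rpc, which parks the app once the RPC server is up so socket options can still be changed before the framework initializes; waitforlisten then blocks until the UNIX socket answers. The pattern as a sketch (nvmf_tgt and scripts/rpc.py are stock SPDK; paths shortened from the job's workspace):

# Start the target paused at the RPC stage, then wait for its socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "target up as pid $nvmfpid, ready for configuration"
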
00:27:18.601 [2024-07-11 11:13:32.677218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.601 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.601 [2024-07-11 11:13:32.742129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.601 [2024-07-11 11:13:32.829850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.601 [2024-07-11 11:13:32.829906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.601 [2024-07-11 11:13:32.829919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.601 [2024-07-11 11:13:32.829931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.601 [2024-07-11 11:13:32.829940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.601 [2024-07-11 11:13:32.830004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.601 [2024-07-11 11:13:32.830098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.601 [2024-07-11 11:13:32.830165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.601 [2024-07-11 11:13:32.830167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.601 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.602 11:13:32 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.861 [2024-07-11 11:13:33.076724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.861 Malloc1 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:18.861 [2024-07-11 11:13:33.129828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=326239 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:18.861 11:13:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.861 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:20.764 
"tick_rate": 2700000000, 00:27:20.764 "poll_groups": [ 00:27:20.764 { 00:27:20.764 "name": "nvmf_tgt_poll_group_000", 00:27:20.764 "admin_qpairs": 1, 00:27:20.764 "io_qpairs": 1, 00:27:20.764 "current_admin_qpairs": 1, 00:27:20.764 "current_io_qpairs": 1, 00:27:20.764 "pending_bdev_io": 0, 00:27:20.764 "completed_nvme_io": 20599, 00:27:20.764 "transports": [ 00:27:20.764 { 00:27:20.764 "trtype": "TCP" 00:27:20.764 } 00:27:20.764 ] 00:27:20.764 }, 00:27:20.764 { 00:27:20.764 "name": "nvmf_tgt_poll_group_001", 00:27:20.764 "admin_qpairs": 0, 00:27:20.764 "io_qpairs": 1, 00:27:20.764 "current_admin_qpairs": 0, 00:27:20.764 "current_io_qpairs": 1, 00:27:20.764 "pending_bdev_io": 0, 00:27:20.764 "completed_nvme_io": 20443, 00:27:20.764 "transports": [ 00:27:20.764 { 00:27:20.764 "trtype": "TCP" 00:27:20.764 } 00:27:20.764 ] 00:27:20.764 }, 00:27:20.764 { 00:27:20.764 "name": "nvmf_tgt_poll_group_002", 00:27:20.764 "admin_qpairs": 0, 00:27:20.764 "io_qpairs": 1, 00:27:20.764 "current_admin_qpairs": 0, 00:27:20.764 "current_io_qpairs": 1, 00:27:20.764 "pending_bdev_io": 0, 00:27:20.764 "completed_nvme_io": 20592, 00:27:20.764 "transports": [ 00:27:20.764 { 00:27:20.764 "trtype": "TCP" 00:27:20.764 } 00:27:20.764 ] 00:27:20.764 }, 00:27:20.764 { 00:27:20.764 "name": "nvmf_tgt_poll_group_003", 00:27:20.764 "admin_qpairs": 0, 00:27:20.764 "io_qpairs": 1, 00:27:20.764 "current_admin_qpairs": 0, 00:27:20.764 "current_io_qpairs": 1, 00:27:20.764 "pending_bdev_io": 0, 00:27:20.764 "completed_nvme_io": 20152, 00:27:20.764 "transports": [ 00:27:20.764 { 00:27:20.764 "trtype": "TCP" 00:27:20.764 } 00:27:20.764 ] 00:27:20.764 } 00:27:20.764 ] 00:27:20.764 }' 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:20.764 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:21.022 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:21.022 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:21.022 11:13:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 326239 00:27:29.134 Initializing NVMe Controllers 00:27:29.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:29.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:29.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:29.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:29.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:29.134 Initialization complete. Launching workers. 
00:27:29.134 ======================================================== 00:27:29.134 Latency(us) 00:27:29.134 Device Information : IOPS MiB/s Average min max 00:27:29.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10531.90 41.14 6076.46 2228.35 10169.05 00:27:29.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10738.90 41.95 5961.20 2486.11 9864.96 00:27:29.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10742.80 41.96 5959.76 1314.38 10086.69 00:27:29.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10679.60 41.72 5994.58 1752.81 9941.59 00:27:29.134 ======================================================== 00:27:29.134 Total : 42693.20 166.77 5997.62 1314.38 10169.05 00:27:29.134 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.134 rmmod nvme_tcp 00:27:29.134 rmmod nvme_fabrics 00:27:29.134 rmmod nvme_keyring 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 326136 ']' 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 326136 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 326136 ']' 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 326136 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326136 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326136' 00:27:29.134 killing process with pid 326136 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 326136 00:27:29.134 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 326136 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.393 11:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.292 11:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.292 11:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:31.292 11:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:31.860 11:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:34.388 11:13:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.662 11:13:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.662 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.663 
11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:27:39.663 00:27:39.663 --- 10.0.0.2 ping statistics --- 00:27:39.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.663 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:27:39.663 00:27:39.663 --- 10.0.0.1 ping statistics --- 00:27:39.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.663 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:39.663 net.core.busy_poll = 1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:39.663 net.core.busy_read = 1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=328853 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 328853 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 328853 ']' 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.663 [2024-07-11 11:13:53.638583] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:27:39.663 [2024-07-11 11:13:53.638688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.663 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.663 [2024-07-11 11:13:53.703015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.663 [2024-07-11 11:13:53.782855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.663 [2024-07-11 11:13:53.782910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.663 [2024-07-11 11:13:53.782932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.663 [2024-07-11 11:13:53.782943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.663 [2024-07-11 11:13:53.782952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
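
Where the baseline run left the NIC untouched, adq_configure_driver (@22-@38 above) turns ADQ on for the second pass: hardware TC offload on the target port, busy-poll sysctls so socket reads spin on the queue instead of sleeping, an mqprio qdisc in channel mode that carves the queues into two traffic classes, and a hardware-offloaded flower filter (skip_sw) that pins NVMe/TCP traffic for 10.0.0.2:4420 to TC 1; the set_xps_rxqs script then aligns transmit queues with receive queues. Consolidated into a sketch, using exactly the values from the trace:

ns="ip netns exec cvl_0_0_ns_spdk"
# NIC prerequisites on the target-side E810 port.
$ns ethtool --offload cvl_0_0 hw-tc-offload on
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Busy polling: application threads poll sockets instead of blocking.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes in channel mode: TC0 = queues 0-1, TC1 = queues 2-3.
$ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP flows (dst 10.0.0.2:4420) into TC1 purely in hardware.
$ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching target-side change in the trace that follows is sock_impl_set_options with --enable-placement-id 1 and a transport created with --sock-priority 1, and the success criterion flips accordingly: with steering in effect, the @100 check expects at least two of the four poll groups to sit idle while the steered queues absorb the connections.
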
00:27:39.663 [2024-07-11 11:13:53.783040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.663 [2024-07-11 11:13:53.783111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.663 [2024-07-11 11:13:53.783175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.663 [2024-07-11 11:13:53.783178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:39.663 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 [2024-07-11 11:13:54.019785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 Malloc1 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.664 [2024-07-11 11:13:54.072943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=328884 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:39.664 11:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:39.921 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:41.821 "tick_rate": 2700000000, 00:27:41.821 "poll_groups": [ 00:27:41.821 { 00:27:41.821 "name": "nvmf_tgt_poll_group_000", 00:27:41.821 "admin_qpairs": 1, 00:27:41.821 "io_qpairs": 3, 00:27:41.821 "current_admin_qpairs": 1, 00:27:41.821 "current_io_qpairs": 3, 00:27:41.821 "pending_bdev_io": 0, 00:27:41.821 "completed_nvme_io": 27056, 00:27:41.821 "transports": [ 00:27:41.821 { 00:27:41.821 "trtype": "TCP" 00:27:41.821 } 00:27:41.821 ] 00:27:41.821 }, 00:27:41.821 { 00:27:41.821 "name": "nvmf_tgt_poll_group_001", 00:27:41.821 "admin_qpairs": 0, 00:27:41.821 "io_qpairs": 1, 00:27:41.821 "current_admin_qpairs": 0, 00:27:41.821 "current_io_qpairs": 1, 00:27:41.821 "pending_bdev_io": 0, 00:27:41.821 "completed_nvme_io": 25690, 00:27:41.821 "transports": [ 00:27:41.821 { 00:27:41.821 "trtype": "TCP" 00:27:41.821 } 00:27:41.821 ] 00:27:41.821 }, 00:27:41.821 { 00:27:41.821 "name": "nvmf_tgt_poll_group_002", 00:27:41.821 "admin_qpairs": 0, 00:27:41.821 "io_qpairs": 0, 00:27:41.821 "current_admin_qpairs": 0, 00:27:41.821 "current_io_qpairs": 0, 00:27:41.821 "pending_bdev_io": 0, 00:27:41.821 "completed_nvme_io": 0, 
00:27:41.821 "transports": [ 00:27:41.821 { 00:27:41.821 "trtype": "TCP" 00:27:41.821 } 00:27:41.821 ] 00:27:41.821 }, 00:27:41.821 { 00:27:41.821 "name": "nvmf_tgt_poll_group_003", 00:27:41.821 "admin_qpairs": 0, 00:27:41.821 "io_qpairs": 0, 00:27:41.821 "current_admin_qpairs": 0, 00:27:41.821 "current_io_qpairs": 0, 00:27:41.821 "pending_bdev_io": 0, 00:27:41.821 "completed_nvme_io": 0, 00:27:41.821 "transports": [ 00:27:41.821 { 00:27:41.821 "trtype": "TCP" 00:27:41.821 } 00:27:41.821 ] 00:27:41.821 } 00:27:41.821 ] 00:27:41.821 }' 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:41.821 11:13:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 328884 00:27:49.928 Initializing NVMe Controllers 00:27:49.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:49.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:49.928 Initialization complete. Launching workers. 00:27:49.928 ======================================================== 00:27:49.928 Latency(us) 00:27:49.928 Device Information : IOPS MiB/s Average min max 00:27:49.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4851.10 18.95 13196.41 2124.52 59557.20 00:27:49.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4793.40 18.72 13353.93 2052.98 60464.94 00:27:49.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13390.00 52.30 4779.64 1744.28 45670.06 00:27:49.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4530.70 17.70 14165.05 2573.94 61333.79 00:27:49.928 ======================================================== 00:27:49.928 Total : 27565.19 107.68 9294.50 1744.28 61333.79 00:27:49.928 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.928 rmmod nvme_tcp 00:27:49.928 rmmod nvme_fabrics 00:27:49.928 rmmod nvme_keyring 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 328853 ']' 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 328853 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 328853 ']' 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 328853 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 328853 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 328853' 00:27:49.928 killing process with pid 328853 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 328853 00:27:49.928 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 328853 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.185 11:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.210 11:14:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.210 11:14:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:52.210 00:27:52.210 real 0m44.863s 00:27:52.210 user 2m38.437s 00:27:52.210 sys 0m11.061s 00:27:52.210 11:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.210 11:14:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.210 ************************************ 00:27:52.210 END TEST nvmf_perf_adq 00:27:52.210 ************************************ 00:27:52.210 11:14:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:52.211 11:14:06 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.211 11:14:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:52.211 11:14:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.211 11:14:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:52.471 ************************************ 00:27:52.471 START TEST nvmf_shutdown 00:27:52.471 ************************************ 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:52.471 * Looking for test storage... 
00:27:52.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.471 ************************************ 00:27:52.471 START TEST nvmf_shutdown_tc1 00:27:52.471 ************************************ 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:52.471 11:14:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.471 11:14:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.375 11:14:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.375 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.376 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:27:54.635 00:27:54.635 --- 10.0.0.2 ping statistics --- 00:27:54.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.635 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:27:54.635 00:27:54.635 --- 10.0.0.1 ping statistics --- 00:27:54.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.635 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=332043 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 332043 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 332043 ']' 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.635 11:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.635 [2024-07-11 11:14:08.998931] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
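The records above show the harness bringing the target up inside the cvl_0_0_ns_spdk network namespace created earlier (cvl_0_0 at 10.0.0.2/24 inside the namespace, cvl_0_1 at 10.0.0.1/24 outside, verified by the two pings), then launching nvmf_tgt with core mask 0x1E and waiting on its RPC socket. A minimal sketch of that launch-and-wait pattern, with a plain socket poll standing in for the harness's waitforlisten helper (an assumption; the paths, flags, and socket name are taken from the trace):

# Start the NVMe-oF target inside the namespace. "ip netns exec"
# execs the binary in place, so $! is the target's own pid.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Block until the app opens its RPC socket before issuing rpc.py calls
# (the real waitforlisten helper also bounds this wait with a retry limit).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done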
00:27:54.635 [2024-07-11 11:14:08.999018] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.635 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.894 [2024-07-11 11:14:09.063455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.894 [2024-07-11 11:14:09.153141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.894 [2024-07-11 11:14:09.153197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.894 [2024-07-11 11:14:09.153211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.894 [2024-07-11 11:14:09.153223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.894 [2024-07-11 11:14:09.153232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.894 [2024-07-11 11:14:09.153381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.894 [2024-07-11 11:14:09.154777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.894 [2024-07-11 11:14:09.154823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:54.894 [2024-07-11 11:14:09.154827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.894 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:54.894 [2024-07-11 11:14:09.311604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:55.152 11:14:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.152 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.152 Malloc1 00:27:55.152 [2024-07-11 11:14:09.390707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.152 Malloc2 00:27:55.152 Malloc3 00:27:55.152 Malloc4 00:27:55.152 Malloc5 00:27:55.412 Malloc6 00:27:55.412 Malloc7 00:27:55.412 Malloc8 00:27:55.412 Malloc9 00:27:55.412 Malloc10 00:27:55.412 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.412 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:55.412 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:55.412 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=332223 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 332223 
/var/tmp/bdevperf.sock 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 332223 ']' 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:55.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 
"name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 
00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.671 "name": "Nvme$subsystem", 00:27:55.671 "trtype": "$TEST_TRANSPORT", 00:27:55.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.671 "adrfam": "ipv4", 00:27:55.671 "trsvcid": "$NVMF_PORT", 00:27:55.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.671 "hdgst": ${hdgst:-false}, 00:27:55.671 "ddgst": ${ddgst:-false} 00:27:55.671 }, 00:27:55.671 "method": "bdev_nvme_attach_controller" 00:27:55.671 } 00:27:55.671 EOF 00:27:55.671 )") 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.671 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.671 { 00:27:55.671 "params": { 00:27:55.672 "name": "Nvme$subsystem", 00:27:55.672 "trtype": "$TEST_TRANSPORT", 00:27:55.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "$NVMF_PORT", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.672 "hdgst": ${hdgst:-false}, 00:27:55.672 "ddgst": ${ddgst:-false} 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 } 00:27:55.672 EOF 00:27:55.672 )") 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:55.672 { 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme$subsystem", 00:27:55.672 "trtype": "$TEST_TRANSPORT", 00:27:55.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "$NVMF_PORT", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.672 "hdgst": ${hdgst:-false}, 00:27:55.672 "ddgst": ${ddgst:-false} 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 } 00:27:55.672 EOF 00:27:55.672 )") 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:55.672 11:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme1", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme2", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme3", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme4", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme5", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme6", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme7", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme8", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:55.672 "hdgst": false, 
00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme9", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme10", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 }' 00:27:55.672 [2024-07-11 11:14:09.892113] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:27:55.672 [2024-07-11 11:14:09.892186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:55.672 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.672 [2024-07-11 11:14:09.955817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.672 [2024-07-11 11:14:10.052078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 332223 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:57.573 11:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:58.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 332223 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 332043 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.507 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.507 { 00:27:58.507 "params": { 00:27:58.507 "name": "Nvme$subsystem", 00:27:58.507 "trtype": "$TEST_TRANSPORT", 00:27:58.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.507 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
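Before this second round of config generation, the trace shows the shutdown_tc1 sequence: the bdev_svc app holding the ten connections was SIGKILLed (the "line 73: 332223 Killed" message), /var/run/spdk_bdev1 was removed, and kill -0 confirmed that the target process itself (pid 332043) survived the abrupt host disconnect; only then is bdevperf pointed at the same subsystems. The liveness check, as a sketch (perfpid and nvmfpid stand for the pids captured when the two apps were started, 332223 and 332043 in this run):

# Drop all host-side connections at once, then verify the target is
# still alive. kill -0 sends no signal; it only tests that the pid
# exists, so a target that died under the disconnect fails the test here.
kill -9 "$perfpid"
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"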
00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.508 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.508 { 00:27:58.508 "params": { 00:27:58.508 "name": "Nvme$subsystem", 00:27:58.508 "trtype": "$TEST_TRANSPORT", 00:27:58.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 "trsvcid": "$NVMF_PORT", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.508 "hdgst": ${hdgst:-false}, 00:27:58.508 "ddgst": ${ddgst:-false} 00:27:58.508 }, 00:27:58.508 "method": "bdev_nvme_attach_controller" 00:27:58.508 } 00:27:58.508 EOF 00:27:58.508 )") 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.509 { 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme$subsystem", 00:27:58.509 "trtype": "$TEST_TRANSPORT", 00:27:58.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "$NVMF_PORT", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.509 "hdgst": ${hdgst:-false}, 00:27:58.509 "ddgst": ${ddgst:-false} 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 } 00:27:58.509 EOF 00:27:58.509 )") 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
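Each loop pass above appends one heredoc-rendered attach-controller fragment to the config array; the fragments are only joined and validated at the end, and the joined result is what the printf trace just below prints. A minimal standalone rendition of the pattern, assuming bash and jq on PATH (the real nvmf/common.sh helper nests the joined fragments inside a larger bdev-subsystem document for bdevperf; this sketch abbreviates that to a bare JSON array):

gen_nvmf_target_json() {
    # Hypothetical standalone version of the helper seen in the trace.
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        # "${@:-1}" means: the caller's subsystem list, or just "1" if none given.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # With IFS set to a comma, "${config[*]}" expands comma-separated;
    # wrapping it in [ ] yields valid JSON for jq to validate and pretty-print.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}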
00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:58.509 11:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme1", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme2", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme3", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme4", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme5", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme6", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme7", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme8", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:58.509 "hdgst": false, 
00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme9", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 },{ 00:27:58.509 "params": { 00:27:58.509 "name": "Nvme10", 00:27:58.509 "trtype": "tcp", 00:27:58.509 "traddr": "10.0.0.2", 00:27:58.509 "adrfam": "ipv4", 00:27:58.509 "trsvcid": "4420", 00:27:58.509 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:58.509 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:58.509 "hdgst": false, 00:27:58.509 "ddgst": false 00:27:58.509 }, 00:27:58.509 "method": "bdev_nvme_attach_controller" 00:27:58.509 }' 00:27:58.509 [2024-07-11 11:14:12.917015] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:27:58.509 [2024-07-11 11:14:12.917133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332640 ] 00:27:58.769 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.769 [2024-07-11 11:14:12.982222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.769 [2024-07-11 11:14:13.072743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.668 Running I/O for 1 seconds... 00:28:01.599 00:28:01.599 Latency(us) 00:28:01.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme1n1 : 1.14 227.11 14.19 0.00 0.00 277385.69 7281.78 260978.92 00:28:01.599 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme2n1 : 1.13 226.61 14.16 0.00 0.00 269503.72 18641.35 256318.58 00:28:01.599 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme3n1 : 1.14 224.06 14.00 0.00 0.00 273595.92 15825.73 260978.92 00:28:01.599 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme4n1 : 1.17 274.24 17.14 0.00 0.00 218737.02 15825.73 262532.36 00:28:01.599 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme5n1 : 1.16 220.76 13.80 0.00 0.00 268763.40 21359.88 257872.02 00:28:01.599 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme6n1 : 1.16 220.22 13.76 0.00 0.00 265052.92 19418.07 260978.92 00:28:01.599 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme7n1 : 1.15 222.10 13.88 0.00 0.00 258047.43 21165.70 259425.47 00:28:01.599 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 
0x0 length 0x400 00:28:01.599 Nvme8n1 : 1.18 271.24 16.95 0.00 0.00 208383.54 16214.09 245444.46 00:28:01.599 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme9n1 : 1.17 218.17 13.64 0.00 0.00 254516.72 21942.42 273406.48 00:28:01.599 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.599 Verification LBA range: start 0x0 length 0x400 00:28:01.599 Nvme10n1 : 1.18 221.27 13.83 0.00 0.00 245868.01 4951.61 281173.71 00:28:01.599 =================================================================================================================== 00:28:01.599 Total : 2325.77 145.36 0.00 0.00 252088.82 4951.61 281173.71 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.855 rmmod nvme_tcp 00:28:01.855 rmmod nvme_fabrics 00:28:01.855 rmmod nvme_keyring 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 332043 ']' 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 332043 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 332043 ']' 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 332043 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 332043 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
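The killprocess helper invoked above proceeds in guarded steps, several of them traced just above and the final kill/wait in the lines that follow: check the PID argument, probe the process with kill -0, look up its command name, confirm it is not a bare sudo wrapper, then signal and reap it. A simplified sketch of that flow (a hypothetical standalone function, not the exact autotest_common.sh source; the special handling the real helper gives sudo-wrapped processes is elided):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1             # mirrors the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1            # the process must still be alive
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then  # here it is reactor_1, so a plain kill is safe
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                           # reap the child so its exit status is known
}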
00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 332043' 00:28:01.855 killing process with pid 332043 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 332043 00:28:01.855 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 332043 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.422 11:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.330 00:28:04.330 real 0m11.925s 00:28:04.330 user 0m34.565s 00:28:04.330 sys 0m3.262s 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:04.330 ************************************ 00:28:04.330 END TEST nvmf_shutdown_tc1 00:28:04.330 ************************************ 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:04.330 ************************************ 00:28:04.330 START TEST nvmf_shutdown_tc2 00:28:04.330 ************************************ 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.330 11:14:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.330 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:04.330 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.330 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.331 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.331 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:28:04.331 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.589 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:04.589 00:28:04.590 --- 10.0.0.2 ping statistics --- 00:28:04.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.590 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:28:04.590 00:28:04.590 --- 10.0.0.1 ping statistics --- 00:28:04.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.590 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=333399 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 333399 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 333399 ']' 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.590 11:14:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.590 [2024-07-11 11:14:18.953448] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:04.590 [2024-07-11 11:14:18.953548] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.590 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.848 [2024-07-11 11:14:19.018838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.848 [2024-07-11 11:14:19.098988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.848 [2024-07-11 11:14:19.099044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.848 [2024-07-11 11:14:19.099068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.848 [2024-07-11 11:14:19.099078] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.848 [2024-07-11 11:14:19.099088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
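The nvmf_tgt process above runs under ip netns exec because nvmftestinit, a few lines earlier, moved one port of the two-port ice NIC into a private namespace: cvl_0_0 becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the trace, the namespace plumbing is the following (root required; the interface names belong to this tester's NIC, so substitute your own pair, or a veth pair on a machine without two ports):

# Put the target port in its own namespace; leave the initiator port outside.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420 on the initiator-facing port, then verify both directions,
# exactly as the ping statistics above record.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1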
00:28:04.848 [2024-07-11 11:14:19.099183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.848 [2024-07-11 11:14:19.099244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.848 [2024-07-11 11:14:19.099311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:04.848 [2024-07-11 11:14:19.099313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.848 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.849 [2024-07-11 11:14:19.247642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.849 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.107 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.107 Malloc1 00:28:05.107 [2024-07-11 11:14:19.331091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.107 Malloc2 00:28:05.107 Malloc3 00:28:05.107 Malloc4 00:28:05.107 Malloc5 00:28:05.366 Malloc6 00:28:05.366 Malloc7 00:28:05.366 Malloc8 00:28:05.366 Malloc9 00:28:05.366 Malloc10 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=333582 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 333582 /var/tmp/bdevperf.sock 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 333582 ']' 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
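Note the --json /dev/fd/63 in the bdevperf command line above: the controller map produced by gen_nvmf_target_json, whose heredoc loop is traced next, is handed over through bash process substitution, so no config file ever touches disk. Stripped of the waitforlisten scaffolding, the invocation boils down to the following ($SPDK_DIR is a stand-in for the workspace checkout root; the rest is taken from the trace):

# Drive a 10-second verify workload (queue depth 64, 64 KiB I/O) against the
# controllers described by the JSON arriving on fd 63.
"$SPDK_DIR/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

waitforlisten then blocks until the RPC server answers on /var/tmp/bdevperf.sock, which is what the "Waiting for process to start up..." message above is reporting.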
00:28:05.366 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:28:05.366 { 
00:28:05.366 "params": { 
00:28:05.366 "name": "Nvme$subsystem", 
00:28:05.366 "trtype": "$TEST_TRANSPORT", 
00:28:05.366 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:05.366 "adrfam": "ipv4", 
00:28:05.366 "trsvcid": "$NVMF_PORT", 
00:28:05.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:05.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:28:05.366 "hdgst": ${hdgst:-false}, 
00:28:05.366 "ddgst": ${ddgst:-false} 
00:28:05.366 }, 
00:28:05.366 "method": "bdev_nvme_attach_controller" 
00:28:05.366 } 
00:28:05.366 EOF 
00:28:05.366 )") 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 
[the for-subsystem/config+=/cat sequence above is traced verbatim nine more times, once for each of subsystems 2 through 10; duplicate iterations omitted] 
11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:05.624 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:05.624 11:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme1", 00:28:05.624 "trtype": "tcp", 00:28:05.624 "traddr": "10.0.0.2", 00:28:05.624 "adrfam": "ipv4", 00:28:05.624 "trsvcid": "4420", 00:28:05.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.624 "hdgst": false, 00:28:05.624 "ddgst": false 00:28:05.624 }, 00:28:05.624 "method": "bdev_nvme_attach_controller" 00:28:05.624 },{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme2", 00:28:05.624 "trtype": "tcp", 00:28:05.624 "traddr": "10.0.0.2", 00:28:05.624 "adrfam": "ipv4", 00:28:05.624 "trsvcid": "4420", 00:28:05.624 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.624 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.624 "hdgst": false, 00:28:05.624 "ddgst": false 00:28:05.624 }, 00:28:05.624 "method": "bdev_nvme_attach_controller" 00:28:05.624 },{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme3", 00:28:05.624 "trtype": "tcp", 00:28:05.624 "traddr": "10.0.0.2", 00:28:05.624 "adrfam": "ipv4", 00:28:05.624 "trsvcid": "4420", 00:28:05.624 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.624 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.624 "hdgst": false, 00:28:05.624 "ddgst": false 00:28:05.624 }, 00:28:05.624 "method": "bdev_nvme_attach_controller" 00:28:05.624 },{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme4", 00:28:05.624 "trtype": "tcp", 00:28:05.624 "traddr": "10.0.0.2", 00:28:05.624 "adrfam": "ipv4", 00:28:05.624 "trsvcid": "4420", 00:28:05.624 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.624 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:05.624 "hdgst": false, 00:28:05.624 "ddgst": false 00:28:05.624 }, 00:28:05.624 "method": "bdev_nvme_attach_controller" 00:28:05.624 },{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme5", 00:28:05.624 "trtype": "tcp", 00:28:05.624 "traddr": "10.0.0.2", 00:28:05.624 "adrfam": "ipv4", 00:28:05.624 "trsvcid": "4420", 00:28:05.624 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.624 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.624 "hdgst": false, 00:28:05.624 "ddgst": false 00:28:05.624 }, 00:28:05.624 "method": "bdev_nvme_attach_controller" 00:28:05.624 },{ 00:28:05.624 "params": { 00:28:05.624 "name": "Nvme6", 00:28:05.625 "trtype": "tcp", 00:28:05.625 "traddr": "10.0.0.2", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "4420", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.625 "hdgst": false, 00:28:05.625 "ddgst": false 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 },{ 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme7", 00:28:05.625 "trtype": "tcp", 00:28:05.625 "traddr": "10.0.0.2", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "4420", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.625 "hdgst": false, 00:28:05.625 "ddgst": false 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 },{ 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme8", 00:28:05.625 "trtype": "tcp", 00:28:05.625 "traddr": "10.0.0.2", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "4420", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.625 "hdgst": false, 
00:28:05.625 "ddgst": false 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 },{ 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme9", 00:28:05.625 "trtype": "tcp", 00:28:05.625 "traddr": "10.0.0.2", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "4420", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.625 "hdgst": false, 00:28:05.625 "ddgst": false 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 },{ 00:28:05.625 "params": { 00:28:05.625 "name": "Nvme10", 00:28:05.625 "trtype": "tcp", 00:28:05.625 "traddr": "10.0.0.2", 00:28:05.625 "adrfam": "ipv4", 00:28:05.625 "trsvcid": "4420", 00:28:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.625 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.625 "hdgst": false, 00:28:05.625 "ddgst": false 00:28:05.625 }, 00:28:05.625 "method": "bdev_nvme_attach_controller" 00:28:05.625 }' 00:28:05.625 [2024-07-11 11:14:19.819136] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:05.625 [2024-07-11 11:14:19.819226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333582 ] 00:28:05.625 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.625 [2024-07-11 11:14:19.882390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.625 [2024-07-11 11:14:19.969134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.996 Running I/O for 10 seconds... 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:07.561 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 333582 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 333582 ']' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 333582 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333582 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333582' killing process with pid 333582 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 333582 00:28:07.562 11:14:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 333582
00:28:07.819 Received shutdown signal, test time was about 0.756674 seconds
00:28:07.819
00:28:07.819 Latency(us)
00:28:07.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:07.819 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme1n1 : 0.73 261.28 16.33 0.00 0.00 241210.15 16214.09 253211.69
00:28:07.819 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme2n1 : 0.72 178.45 11.15 0.00 0.00 343989.67 24175.50 282727.16
00:28:07.819 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme3n1 : 0.75 256.71 16.04 0.00 0.00 233125.61 26991.12 215928.98
00:28:07.819 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme4n1 : 0.72 264.92 16.56 0.00 0.00 218676.53 5364.24 256318.58
00:28:07.819 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme5n1 : 0.71 187.87 11.74 0.00 0.00 292875.47 10534.31 257872.02
00:28:07.819 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme6n1 : 0.75 257.49 16.09 0.00 0.00 214315.87 35535.08 176316.11
00:28:07.819 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme7n1 : 0.75 254.90 15.93 0.00 0.00 211076.49 30680.56 195734.19
00:28:07.819 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme8n1 : 0.71 181.49 11.34 0.00 0.00 283919.55 20388.98 251658.24
00:28:07.819 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme9n1 : 0.74 258.77 16.17 0.00 0.00 195502.59 32622.36 212822.09
00:28:07.819 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.819 Verification LBA range: start 0x0 length 0x400
00:28:07.819 Nvme10n1 : 0.76 254.03 15.88 0.00 0.00 194181.75 26991.12 228356.55
00:28:07.819 ===================================================================================================================
00:28:07.820 Total : 2355.91 147.24 0.00 0.00 235936.47 5364.24 282727.16
00:28:07.820 11:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.191 rmmod nvme_tcp 00:28:09.191 rmmod nvme_fabrics 00:28:09.191 rmmod nvme_keyring 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 333399 ']' 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z
333399 ']' 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333399' 00:28:09.191 killing process with pid 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 333399 00:28:09.191 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 333399 00:28:09.448 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.448 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.448 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.448 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.449 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.449 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.449 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.449 11:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.980 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.980 00:28:11.980 real 0m7.081s 00:28:11.980 user 0m20.551s 00:28:11.980 sys 0m1.352s 00:28:11.980 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.980 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.980 ************************************ 00:28:11.980 END TEST nvmf_shutdown_tc2 00:28:11.980 ************************************ 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:11.981 ************************************ 00:28:11.981 START TEST nvmf_shutdown_tc3 00:28:11.981 ************************************ 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:11.981 
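
The killprocess sequence traced above (for pids 333582 and 333399 here, and later for 334355) is always the same: check that a pid was supplied, probe it with kill -0, make sure it is not sudo itself, then kill and reap it. A minimal sketch of that pattern, assuming a simplified shape rather than the exact helper in common/autotest_common.sh:

  # Sketch of the killprocess pattern seen in the trace; details may differ
  # from the real helper in common/autotest_common.sh.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 1   # process must still be alive
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1   # never signal sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null   # reap the child so its ports and sockets free up
  }
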
11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.981 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:28:11.981 00:28:11.981 --- 10.0.0.2 ping statistics --- 00:28:11.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.982 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:28:11.982 11:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:28:11.982 00:28:11.982 --- 10.0.0.1 ping statistics --- 00:28:11.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.982 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=334355 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 334355 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 334355 ']' 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.982 [2024-07-11 11:14:26.082828] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
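
The address plumbing that nvmftestinit just performed splits one NIC pair into a target side and an initiator side: cvl_0_0 (10.0.0.2) moves into its own network namespace for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings confirm reachability in both directions. Roughly, as a sketch using the interface and namespace names from this run (run as root):

  # Sketch of the nvmf_tcp_init plumbing traced above.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target port into its namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
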
00:28:11.982 [2024-07-11 11:14:26.082917] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.982 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.982 [2024-07-11 11:14:26.147029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.982 [2024-07-11 11:14:26.236231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.982 [2024-07-11 11:14:26.236289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.982 [2024-07-11 11:14:26.236302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.982 [2024-07-11 11:14:26.236313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.982 [2024-07-11 11:14:26.236323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.982 [2024-07-11 11:14:26.236409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.982 [2024-07-11 11:14:26.236475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.982 [2024-07-11 11:14:26.236537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:11.982 [2024-07-11 11:14:26.236539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.982 [2024-07-11 11:14:26.384557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:11.982 11:14:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.982 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.242 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.242 Malloc1 00:28:12.242 [2024-07-11 11:14:26.469639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.242 Malloc2 00:28:12.242 Malloc3 00:28:12.242 Malloc4 00:28:12.242 Malloc5 00:28:12.501 Malloc6 00:28:12.501 Malloc7 00:28:12.501 Malloc8 00:28:12.501 Malloc9 00:28:12.501 Malloc10 00:28:12.501 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.501 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:12.501 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.501 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=334532 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 334532 
/var/tmp/bdevperf.sock 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 334532 ']' 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:12.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 
00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": 
"Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.760 "hdgst": ${hdgst:-false}, 00:28:12.760 "ddgst": ${ddgst:-false} 00:28:12.760 }, 00:28:12.760 "method": "bdev_nvme_attach_controller" 00:28:12.760 } 00:28:12.760 EOF 00:28:12.760 )") 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.760 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.760 { 00:28:12.760 "params": { 00:28:12.760 "name": "Nvme$subsystem", 00:28:12.760 "trtype": "$TEST_TRANSPORT", 00:28:12.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.760 "adrfam": "ipv4", 00:28:12.760 "trsvcid": "$NVMF_PORT", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.761 "hdgst": ${hdgst:-false}, 00:28:12.761 "ddgst": ${ddgst:-false} 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 } 00:28:12.761 EOF 00:28:12.761 )") 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.761 { 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme$subsystem", 00:28:12.761 "trtype": "$TEST_TRANSPORT", 00:28:12.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "$NVMF_PORT", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.761 "hdgst": ${hdgst:-false}, 00:28:12.761 "ddgst": ${ddgst:-false} 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 } 00:28:12.761 EOF 00:28:12.761 )") 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.761 { 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme$subsystem", 00:28:12.761 "trtype": "$TEST_TRANSPORT", 00:28:12.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "$NVMF_PORT", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.761 "hdgst": ${hdgst:-false}, 00:28:12.761 "ddgst": ${ddgst:-false} 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 } 00:28:12.761 EOF 00:28:12.761 )") 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:12.761 11:14:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme1", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme2", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme3", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme4", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme5", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme6", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme7", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme8", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.761 "hdgst": false, 
00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme9", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 },{ 00:28:12.761 "params": { 00:28:12.761 "name": "Nvme10", 00:28:12.761 "trtype": "tcp", 00:28:12.761 "traddr": "10.0.0.2", 00:28:12.761 "adrfam": "ipv4", 00:28:12.761 "trsvcid": "4420", 00:28:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.761 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.761 "hdgst": false, 00:28:12.761 "ddgst": false 00:28:12.761 }, 00:28:12.761 "method": "bdev_nvme_attach_controller" 00:28:12.761 }' 00:28:12.761 [2024-07-11 11:14:26.985965] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:12.761 [2024-07-11 11:14:26.986057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334532 ] 00:28:12.761 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.761 [2024-07-11 11:14:27.048503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.761 [2024-07-11 11:14:27.135252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.660 Running I/O for 10 seconds... 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:14.918 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.176 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.176 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.176 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.176 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:15.177 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 334355 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 334355 ']' 00:28:15.454 
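
The polling visible above (read_io_count=3, then 67, then 136) is the waitforio loop from target/shutdown.sh: up to ten attempts a quarter second apart, succeeding once the bdev has completed at least 100 reads. Roughly, where rpc_cmd is the test suite's wrapper around scripts/rpc.py:

  # Rough shape of the waitforio loop exercised above.
  waitforio() {
      local sock=$1 bdev=$2 ret=1 i count
      for ((i = 10; i != 0; i--)); do
          count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
          if [ "$count" -ge 100 ]; then
              ret=0   # enough I/O observed; the shutdown test may proceed
              break
          fi
          sleep 0.25
      done
      return $ret
  }
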
11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 334355 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334355 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334355' 00:28:15.454 killing process with pid 334355 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 334355 00:28:15.454 11:14:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 334355 00:28:15.454 [2024-07-11 11:14:29.781903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac2af0 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.782985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is same with the state(5) to be set 00:28:15.454 [2024-07-11 11:14:29.783205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3c30 is 
00:28:15.454 [2024-07-11 11:14:29.785381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac2f90 is same with the state(5) to be set
[... identical message repeated 62 more times for tqpair=0x1ac2f90, 11:14:29.785405 through 11:14:29.786220 ...]
00:28:15.455 [2024-07-11 11:14:29.788022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3430 is same with the state(5) to be set
[... identical message repeated 62 more times for tqpair=0x1ac3430, 11:14:29.788071 through 11:14:29.788874 ...]
00:28:15.456 [2024-07-11 11:14:29.789222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:15.456 [2024-07-11 11:14:29.789264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3 ...]
00:28:15.456 [2024-07-11 11:14:29.789370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa370 is same with the state(5) to be set
00:28:15.456 [2024-07-11 11:14:29.789472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:15.456 [2024-07-11 11:14:29.789494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1 through cid:3 ...]
00:28:15.456 [2024-07-11 11:14:29.789598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448350 is same with the state(5) to be set
00:28:15.456 [2024-07-11 11:14:29.789647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:15.456 [2024-07-11 11:14:29.789668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1 through cid:3 ...]
00:28:15.456 [2024-07-11 11:14:29.789787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe71ee0 is same with the state(5) to be set
00:28:15.456 [2024-07-11 11:14:29.789833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:15.456 [2024-07-11 11:14:29.789853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for cid:1 through cid:3 ...]
00:28:15.456 [2024-07-11 11:14:29.789953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa8c0 is same with the state(5) to be set
00:28:15.456 [2024-07-11 11:14:29.790013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.456 [2024-07-11 11:14:29.790042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE / ABORTED - SQ DELETION pair repeated for cid:1 through cid:62 (lba:24704 through lba:32512, len:128 each), 11:14:29.790069 through 11:14:29.792106 ...]
00:28:15.457 [2024-07-11 11:14:29.790547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac38f0 is same with the state(5) to be set
[... identical message repeated 62 more times for tqpair=0x1ac38f0, 11:14:29.790580 through 11:14:29.791458, interleaved with the WRITE abort dump ...]
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.459 [2024-07-11 11:14:29.792106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.459 [2024-07-11 11:14:29.792122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.459 [2024-07-11 11:14:29.792136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.459 [2024-07-11 11:14:29.792223] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe750e0 was disconnected and freed. reset controller. 00:28:15.459 [2024-07-11 11:14:29.792549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792850] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.792995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the 
state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.793460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3d90 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.794927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.794955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.794985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.794998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.459 [2024-07-11 11:14:29.795093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 
11:14:29.795215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same 
with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.795990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.796018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.796034] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dd20 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.796169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:15.460 [2024-07-11 11:14:29.796216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe71ee0 (9): Bad file descriptor 00:28:15.460 [2024-07-11 11:14:29.797324] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.460 [2024-07-11 11:14:29.797810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.797995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798039] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.460 [2024-07-11 11:14:29.798048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.460 [2024-07-11 11:14:29.798229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.460 [2024-07-11 11:14:29.798249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe71ee0 with addr=10.0.0.2, port=4420 [2024-07-11 11:14:29.798254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe71ee0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798379] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.461 [2024-07-11 11:14:29.798396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798524]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e1c0 is same with the state(5) to be set 00:28:15.461 [2024-07-11 11:14:29.798647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461
[2024-07-11 11:14:29.798709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.798971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.798986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 
11:14:29.799053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.461 [2024-07-11 11:14:29.799377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.461 [2024-07-11 11:14:29.799393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.799981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.799997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e680 is same with the state(5) to be set 00:28:15.462 [2024-07-11 11:14:29.800230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e680 is same with the state(5) to be set 00:28:15.462 [2024-07-11 11:14:29.800245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e680 is same with the state(5) to be set 00:28:15.462 [2024-07-11 11:14:29.800262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e680 is same with the state(5) to be set 00:28:15.462 [2024-07-11 11:14:29.800276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e680 is same with the state(5) to be set 00:28:15.462 [2024-07-11 11:14:29.800293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.462 [2024-07-11 11:14:29.800485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-11 11:14:29.800500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.800516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.463 [2024-07-11 11:14:29.800529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.800545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142d550 is same with the state(5) to be set 00:28:15.463 [2024-07-11 11:14:29.800621] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x142d550 was disconnected and freed. reset controller. 
00:28:15.463 [2024-07-11 11:14:29.800633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d3790 is same with the state(5) to be set 
[... preceding tcp.c:1607 message for tqpair=0x19d3790 repeated 62 more times between 11:14:29.800662 and 11:14:29.801479 ...] 
00:28:15.463 [2024-07-11 11:14:29.801525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe71ee0 (9): Bad file descriptor 00:28:15.463 [2024-07-11 11:14:29.801558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa370 (9): Bad file 
descriptor 00:28:15.463 [2024-07-11 11:14:29.801611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cc40 is same with the state(5) to be set 00:28:15.463 [2024-07-11 11:14:29.801795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.801926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450030 is same with the state(5) to be set 00:28:15.463 [2024-07-11 11:14:29.801973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-11 11:14:29.801994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.463 [2024-07-11 11:14:29.802010] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a8b10 is same with the state(5) to be set 00:28:15.464 [2024-07-11 11:14:29.802148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda0610 is same with the state(5) to be set 00:28:15.464 [2024-07-11 11:14:29.802322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1490 is same with the state(5) to be set 00:28:15.464 [2024-07-11 11:14:29.802487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-11 11:14:29.802599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.802613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ca3d0 is same with the state(5) to be set 00:28:15.464 [2024-07-11 11:14:29.802644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448350 (9): Bad file descriptor 00:28:15.464 [2024-07-11 11:14:29.802678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0 (9): Bad file descriptor 00:28:15.464 [2024-07-11 11:14:29.803937] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:15.464 [2024-07-11 11:14:29.804157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.464 [2024-07-11 11:14:29.804953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-11 11:14:29.804968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.804984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.804998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.465 [2024-07-11 11:14:29.805225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.805510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.805526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 
11:14:29.805540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.817850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.817914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.817933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.817949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.817965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.817979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.817996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818211] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-11 11:14:29.818556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.465 [2024-07-11 11:14:29.818709] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1280ec0 was disconnected and freed. reset controller. 00:28:15.465 [2024-07-11 11:14:29.818916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:15.465 [2024-07-11 11:14:29.818969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8b10 (9): Bad file descriptor 00:28:15.465 [2024-07-11 11:14:29.818997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:15.465 [2024-07-11 11:14:29.819013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:15.465 [2024-07-11 11:14:29.819031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:15.465 [2024-07-11 11:14:29.819097] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.465 [2024-07-11 11:14:29.819130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143cc40 (9): Bad file descriptor 00:28:15.466 [2024-07-11 11:14:29.819156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450030 (9): Bad file descriptor 00:28:15.466 [2024-07-11 11:14:29.819182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda0610 (9): Bad file descriptor 00:28:15.466 [2024-07-11 11:14:29.819213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d1490 (9): Bad file descriptor 00:28:15.466 [2024-07-11 11:14:29.819243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ca3d0 (9): Bad file descriptor 00:28:15.466 [2024-07-11 11:14:29.820730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:15.466 [2024-07-11 11:14:29.820837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.820861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.820883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.820900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.820917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.820932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.820949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.820964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.820980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 
11:14:29.821174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-11 11:14:29.821472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.466 [2024-07-11 11:14:29.821488] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.466 [2024-07-11 11:14:29.821502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:21 through cid:62 (lba:19072-24320, len:128), 11:14:29.821519-11:14:29.822854 ...]
00:28:15.467 [2024-07-11 11:14:29.822871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.467 [2024-07-11 11:14:29.822885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:15.467 [2024-07-11 11:14:29.822901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe762e0 is same with the state(5) to be set
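Every completion in the dump above carries the same status word, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion). The reads did not fail on the media; they were flushed when their submission queue was deleted. As a minimal standalone sketch of where those two hex values come from, assuming only the NVMe base-specification layout of the 16-bit completion status field (this is not SPDK's own code):

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit NVMe completion status field into the "(SCT/SC)"
 * form used in the log above. Layout per the NVMe base specification:
 * bit 0 = phase tag (P), bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bit 14 = more (M),
 * bit 15 = do not retry (DNR). */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT=0x0, SC=0x08 encodes as 0x08 << 1 = 0x0010 and prints
     * "(00/08) p:0 m:0 dnr:0", matching the completions above. */
    print_status(0x0010);
    return 0;
}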
00:28:15.467 [2024-07-11 11:14:29.824135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.467 [2024-07-11 11:14:29.824159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:63 (lba:16512-24448, len:128), 11:14:29.824179-11:14:29.826162 ...]
00:28:15.469 [2024-07-11 11:14:29.826177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127bd20 is same with the state(5) to be set
00:28:15.469 [2024-07-11 11:14:29.827763] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
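Aborts with this status are an expected side effect of a controller reset rather than genuine I/O failures, so a host application normally separates them from other errors before deciding whether to requeue. A hedged sketch of that check in an SPDK I/O completion callback follows; struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), SPDK_NVME_SCT_GENERIC and SPDK_NVME_SC_ABORTED_SQ_DELETION are identifiers from SPDK's nvme headers, but the callback itself is illustrative and not the code this test runs:

#include "spdk/nvme.h"

/* Illustrative completion callback: treat SQ-deletion aborts (the
 * "(00/08)" completions above) as retryable, not as media errors. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;

    if (spdk_nvme_cpl_is_error(cpl) &&
        cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* The submission queue was deleted (e.g. during the controller
         * resets seen below); the I/O can be resubmitted once the
         * controller reconnects. */
        return;
    }
    /* handle success or other status codes here */
}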
00:28:15.469 [2024-07-11 11:14:29.827999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.469 [2024-07-11 11:14:29.828028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:63 (lba:16512-24448, len:128), 11:14:29.828051-11:14:29.830051 ...]
00:28:15.470 [2024-07-11 11:14:29.830067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1283740 is same with the state(5) to be set
00:28:15.470 [2024-07-11 11:14:29.831690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:15.470 [2024-07-11 11:14:29.831722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:15.470 [2024-07-11 11:14:29.831742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:15.470 [2024-07-11 11:14:29.831973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.470 [2024-07-11 11:14:29.832003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a8b10 with addr=10.0.0.2, port=4420 [2024-07-11 11:14:29.832021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a8b10 is same with the state(5) to be set
00:28:15.470 [2024-07-11 11:14:29.832103] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
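errno = 111 is ECONNREFUSED: while the subsystems are resetting, nothing is listening on 10.0.0.2:4420, so each reconnect attempt is refused until the target's listener returns. A minimal POSIX sketch of that retry pattern, for illustration only: the address and port are taken from the log, the retry count and the 100 ms back-off are assumptions, and SPDK's real reconnect handling lives in its bdev_nvme layer rather than in code like this:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep retrying connect() while the target refuses connections, the
 * situation behind the repeated "connect() failed, errno = 111" lines. */
static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1)
        return -1;

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                /* connected */
        if (errno != ECONNREFUSED) {  /* only ECONNREFUSED is retryable here */
            close(fd);
            return -1;
        }
        fprintf(stderr, "connect() failed, errno = %d; retrying\n", errno);
        close(fd);
        usleep(100 * 1000);           /* assumed 100 ms back-off */
    }
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 10);

    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}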
00:28:15.470 [2024-07-11 11:14:29.832137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8b10 (9): Bad file descriptor
00:28:15.470 [2024-07-11 11:14:29.832307] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:15.470 [2024-07-11 11:14:29.832676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:15.470 [2024-07-11 11:14:29.832831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.470 [2024-07-11 11:14:29.832859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa8c0 with addr=10.0.0.2, port=4420 [2024-07-11 11:14:29.832877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa8c0 is same with the state(5) to be set
00:28:15.471 [2024-07-11 11:14:29.832968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.471 [2024-07-11 11:14:29.832993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa370 with addr=10.0.0.2, port=4420 [2024-07-11 11:14:29.833009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa370 is same with the state(5) to be set
00:28:15.471 [2024-07-11 11:14:29.833104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.471 [2024-07-11 11:14:29.833128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143cc40 with addr=10.0.0.2, port=4420 [2024-07-11 11:14:29.833144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143cc40 is same with the state(5) to be set
00:28:15.471 [2024-07-11 11:14:29.833715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.471 [2024-07-11 11:14:29.833739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:13 (lba:16512-18048, len:128), 11:14:29.833773-11:14:29.834186 ...]
00:28:15.471 [2024-07-11 11:14:29.834203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.471 [2024-07-11 11:14:29.834217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:15.471 [2024-07-11 11:14:29.834235]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.834975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.834989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.471 [2024-07-11 11:14:29.835006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.471 [2024-07-11 11:14:29.835022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.835786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.835802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127d060 is same with the state(5) to be set 00:28:15.472 [2024-07-11 11:14:29.837055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.472 [2024-07-11 11:14:29.837629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.472 [2024-07-11 11:14:29.837646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.837970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.837985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.838944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.838959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127e510 is same with the state(5) to be set 00:28:15.473 [2024-07-11 11:14:29.840202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.840225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.473 [2024-07-11 11:14:29.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.473 [2024-07-11 11:14:29.840263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.840972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.840988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.474 [2024-07-11 11:14:29.841412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.474 [2024-07-11 11:14:29.841429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.475 [2024-07-11 11:14:29.841443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.475 [2024-07-11 11:14:29.841460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.475 [2024-07-11 11:14:29.841475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.475 [2024-07-11 11:14:29.841491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.475 [2024-07-11 11:14:29.841506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.475 [2024-07-11 11:14:29.841522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:15.475 [2024-07-11 11:14:29.841536 - 11:14:29.842245] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 cid:42-63 lba:21760-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [22 near-identical command/completion pairs condensed]
00:28:15.475 [2024-07-11 11:14:29.842260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f9c0 is same with the state(5) to be set
00:28:15.475 [2024-07-11 11:14:29.843529 - 11:14:29.852165] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 cid:0-63 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -- ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 near-identical command/completion pairs condensed]
00:28:15.477 [2024-07-11 11:14:29.854390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:15.477 [2024-07-11 11:14:29.854433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:15.477 [2024-07-11 11:14:29.854454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:15.477 [2024-07-11 11:14:29.854484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:15.737 task offset: 24576 on job bdev=Nvme1n1 fails
00:28:15.737 
00:28:15.737 Latency(us)
00:28:15.737 Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:28:15.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.737 Job: Nvme1n1 ended in about 0.88 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme1n1            : 0.88        218.61  13.66  72.87   0.00  216982.66  7184.69   246997.90
00:28:15.738 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme2n1 ended in about 0.91 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme2n1            : 0.91        141.07   8.82  70.54   0.00  292979.04  22524.97  260978.92
00:28:15.738 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme3n1 ended in about 0.91 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme3n1            : 0.91        140.57   8.79  70.28   0.00  287872.95  17379.18  282727.16
00:28:15.738 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme4n1 ended in about 0.89 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme4n1            : 0.89        220.91  13.81  72.13   0.00  202261.83  9951.76   259425.47
00:28:15.738 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme5n1 ended in about 0.92 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme5n1            : 0.92        139.10   8.69  69.55   0.00  278871.36  41360.50  234570.33
00:28:15.738 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme6n1 ended in about 0.92 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme6n1            : 0.92        144.04   9.00  63.90   0.00  272764.78  30680.56  246997.90
00:28:15.738 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme7n1 ended in about 0.93 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme7n1            : 0.93        138.13   8.63  69.07   0.00  268749.18  16505.36  256318.58
00:28:15.738 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme8n1 ended in about 0.90 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme8n1            : 0.90        212.40  13.27  70.80   0.00  191311.08  18350.08  256318.58
00:28:15.738 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme9n1 ended in about 0.94 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme9n1            : 0.94        136.67   8.54  68.33   0.00  260065.66  21456.97  256318.58
00:28:15.738 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.738 Job: Nvme10n1 ended in about 0.91 seconds with error
00:28:15.738 Verification LBA range: start 0x0 length 0x400
00:28:15.738 Nvme10n1           : 0.91        139.97   8.75  69.98   0.00  246810.30  22039.51  274959.93
00:28:15.738 ===================================================================================================================
00:28:15.738 Total              :            1631.47 101.97 697.46   0.00  247386.11  7184.69   282727.16
00:28:15.738 [2024-07-11 11:14:29.881658] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:15.738 [2024-07-11 11:14:29.881987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.738 [2024-07-11 11:14:29.882024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1448350 with addr=10.0.0.2, port=4420
00:28:15.738 [2024-07-11 11:14:29.882046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448350 is same with the state(5) to be set
00:28:15.738 [2024-07-11 11:14:29.882076 - 11:14:29.882134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa8c0, 0x12aa370, 0x143cc40 (9): Bad file descriptor [3 entries condensed]
00:28:15.738 [2024-07-11 11:14:29.882152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:28:15.738 [2024-07-11 11:14:29.882166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:28:15.738 [2024-07-11 11:14:29.882183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:28:15.738 [2024-07-11 11:14:29.882246 - 11:14:29.882337] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [repeated 5 times]
00:28:15.738 [2024-07-11 11:14:29.882358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448350 (9): Bad file descriptor
00:28:15.738 [2024-07-11 11:14:29.882512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:15.738 [2024-07-11 11:14:29.882556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:15.738 [2024-07-11 11:14:29.882700 - 11:14:29.885368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe71ee0, 0x12d1490, 0x12ca3d0, 0xda0610, 0x1450030, 0x12a8b10 with addr=10.0.0.2, port=4420 [6 reconnect attempts condensed; each also logged nvme_tcp.c: 327: "The recv state of tqpair is same with the state(5) to be set" and nvme_tcp.c:2185: "Failed to flush tqpair (9): Bad file descriptor"]
00:28:15.738 [2024-07-11 11:14:29.883170 - 11:14:29.885546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init / 1818:spdk_nvme_ctrlr_reconnect_poll_async / 1106:nvme_ctrlr_fail: *ERROR*: Ctrlr is in error state; controller reinitialization failed; in failed state. [repeated for nqn.2016-06.io.spdk:cnode2, cnode3, cnode8, cnode10, cnode1, cnode5, cnode6, cnode7, cnode9, cnode4]
00:28:15.738 [2024-07-11 11:14:29.883343 - 11:14:29.883408] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [repeated 4 times]
00:28:15.738 [2024-07-11 11:14:29.884874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:15.738 [2024-07-11 11:14:29.884487 - 11:14:29.885451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [repeated 9 times]
00:28:15.739 [2024-07-11 11:14:29.885585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:15.998 11:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:28:15.998 11:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 334532
00:28:16.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (334532) - No such process
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:16.939 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:16.939 rmmod nvme_tcp
00:28:17.199 rmmod nvme_fabrics
00:28:17.199 rmmod nvme_keyring
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:17.199 11:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:19.102 
00:28:19.102 real 0m7.591s
00:28:19.102 user 0m18.896s
00:28:19.102 sys 0m1.443s
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:19.102 ************************************
00:28:19.102 END TEST nvmf_shutdown_tc3
00:28:19.102 ************************************
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:28:19.102 
00:28:19.102 real 0m26.827s
00:28:19.102 user 1m14.109s
00:28:19.102 sys 0m6.205s
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:19.102 11:14:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:19.102 ************************************
00:28:19.102 END TEST nvmf_shutdown
00:28:19.102 ************************************
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:19.102 11:14:33 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.102 11:14:33 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.102 11:14:33 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:28:19.102 11:14:33 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:19.102 11:14:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:19.372 ************************************
00:28:19.372 START TEST nvmf_multicontroller
00:28:19.372 ************************************
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:19.372 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2-6 -- # [four near-identical PATH=... assignments, an export PATH and an echo of the result condensed: each prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the standard system PATH]
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:28:19.372 11:14:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291-318 -- # [array setup condensed: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays declared; e810 collects Intel 0x1592/0x159b, x722 collects 0x37d2, mlx collects Mellanox 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:21.908 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 --
(( 1 == 0 )) 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.909 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.909 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.909 11:14:35 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:28:21.909 00:28:21.909 --- 10.0.0.2 ping statistics --- 00:28:21.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.909 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:21.909 00:28:21.909 --- 10.0.0.1 ping statistics --- 00:28:21.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.909 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=337051 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 337051 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 337051 ']' 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.909 11:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 [2024-07-11 11:14:35.935186] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:21.909 [2024-07-11 11:14:35.935267] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.909 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.909 [2024-07-11 11:14:35.998543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:21.909 [2024-07-11 11:14:36.087389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.909 [2024-07-11 11:14:36.087450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.909 [2024-07-11 11:14:36.087464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.909 [2024-07-11 11:14:36.087475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.909 [2024-07-11 11:14:36.087484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
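For reference, the network plumbing that nvmf_tcp_init traced above condenses to the following commands (a minimal sketch; the interface names cvl_0_0/cvl_0_1 and the addresses are the ones printed in the trace). The first NIC port is isolated in a network namespace so that the target and the initiator can share a single dual-port adapter on one box:

# Target side lives in its own namespace, initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk                                       # namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root ns -> target ns check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns check

Every target invocation is then prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is why nvmf_tgt above was launched as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE'.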
00:28:21.909 [2024-07-11 11:14:36.087570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.909 [2024-07-11 11:14:36.087634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.909 [2024-07-11 11:14:36.087637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 [2024-07-11 11:14:36.232188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 Malloc0 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.909 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.910 [2024-07-11 11:14:36.295731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.910 
11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:21.910 [2024-07-11 11:14:36.303614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.910 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.168 Malloc1 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=337078 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 337078 /var/tmp/bdevperf.sock 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 337078 ']' 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.168 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 NVMe0n1 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.427 1 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 request: 00:28:22.427 { 00:28:22.427 "name": "NVMe0", 00:28:22.427 "trtype": "tcp", 00:28:22.427 "traddr": "10.0.0.2", 00:28:22.427 "adrfam": "ipv4", 00:28:22.427 "trsvcid": "4420", 00:28:22.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.427 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:22.427 "hostaddr": "10.0.0.2", 00:28:22.427 "hostsvcid": "60000", 00:28:22.427 "prchk_reftag": false, 00:28:22.427 "prchk_guard": false, 00:28:22.427 "hdgst": false, 00:28:22.427 "ddgst": false, 00:28:22.427 "method": "bdev_nvme_attach_controller", 00:28:22.427 "req_id": 1 00:28:22.427 } 00:28:22.427 Got JSON-RPC error response 00:28:22.427 response: 00:28:22.427 { 00:28:22.427 "code": -114, 00:28:22.427 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.427 } 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 request: 00:28:22.427 { 00:28:22.427 "name": "NVMe0", 00:28:22.427 "trtype": "tcp", 00:28:22.427 "traddr": "10.0.0.2", 00:28:22.427 "adrfam": "ipv4", 00:28:22.427 "trsvcid": "4420", 00:28:22.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:22.427 "hostaddr": "10.0.0.2", 00:28:22.427 "hostsvcid": "60000", 00:28:22.427 "prchk_reftag": false, 00:28:22.427 "prchk_guard": false, 00:28:22.427 
"hdgst": false, 00:28:22.427 "ddgst": false, 00:28:22.427 "method": "bdev_nvme_attach_controller", 00:28:22.427 "req_id": 1 00:28:22.427 } 00:28:22.427 Got JSON-RPC error response 00:28:22.427 response: 00:28:22.427 { 00:28:22.427 "code": -114, 00:28:22.427 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.427 } 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 request: 00:28:22.427 { 00:28:22.427 "name": "NVMe0", 00:28:22.427 "trtype": "tcp", 00:28:22.427 "traddr": "10.0.0.2", 00:28:22.427 "adrfam": "ipv4", 00:28:22.427 "trsvcid": "4420", 00:28:22.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.427 "hostaddr": "10.0.0.2", 00:28:22.427 "hostsvcid": "60000", 00:28:22.427 "prchk_reftag": false, 00:28:22.427 "prchk_guard": false, 00:28:22.427 "hdgst": false, 00:28:22.427 "ddgst": false, 00:28:22.427 "multipath": "disable", 00:28:22.428 "method": "bdev_nvme_attach_controller", 00:28:22.428 "req_id": 1 00:28:22.428 } 00:28:22.428 Got JSON-RPC error response 00:28:22.428 response: 00:28:22.428 { 00:28:22.428 "code": -114, 00:28:22.428 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:22.428 } 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.428 11:14:36 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.428 request: 00:28:22.428 { 00:28:22.428 "name": "NVMe0", 00:28:22.428 "trtype": "tcp", 00:28:22.428 "traddr": "10.0.0.2", 00:28:22.428 "adrfam": "ipv4", 00:28:22.428 "trsvcid": "4420", 00:28:22.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.428 "hostaddr": "10.0.0.2", 00:28:22.428 "hostsvcid": "60000", 00:28:22.428 "prchk_reftag": false, 00:28:22.428 "prchk_guard": false, 00:28:22.428 "hdgst": false, 00:28:22.428 "ddgst": false, 00:28:22.428 "multipath": "failover", 00:28:22.428 "method": "bdev_nvme_attach_controller", 00:28:22.428 "req_id": 1 00:28:22.428 } 00:28:22.428 Got JSON-RPC error response 00:28:22.428 response: 00:28:22.428 { 00:28:22.428 "code": -114, 00:28:22.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:22.428 } 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.428 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.685 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.685 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:22.685 11:14:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.062 0 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 337078 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 337078 ']' 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 337078 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 337078 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 337078' 00:28:24.062 killing process with pid 337078 00:28:24.062 11:14:38 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 337078 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 337078 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:24.062 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:24.062 [2024-07-11 11:14:36.406126] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:24.062 [2024-07-11 11:14:36.406208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337078 ] 00:28:24.062 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.062 [2024-07-11 11:14:36.465898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.062 [2024-07-11 11:14:36.552593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.062 [2024-07-11 11:14:36.975218] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0f3e35d8-c6c6-4210-a97b-7ddd6de8575a already exists 00:28:24.062 [2024-07-11 11:14:36.975257] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0f3e35d8-c6c6-4210-a97b-7ddd6de8575a alias for bdev NVMe1n1 00:28:24.062 [2024-07-11 11:14:36.975271] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:24.062 Running I/O for 1 seconds... 
00:28:24.062 
00:28:24.062 Latency(us) 
00:28:24.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:24.062 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 
00:28:24.062 NVMe0n1 : 1.00 18990.75 74.18 0.00 0.00 6730.23 4296.25 12913.02 
00:28:24.062 =================================================================================================================== 
00:28:24.062 Total : 18990.75 74.18 0.00 0.00 6730.23 4296.25 12913.02 
00:28:24.062 Received shutdown signal, test time was about 1.000000 seconds 
00:28:24.062 
00:28:24.062 Latency(us) 
00:28:24.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:28:24.062 =================================================================================================================== 
00:28:24.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:28:24.062 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:28:24.062 rmmod nvme_tcp 
00:28:24.062 rmmod nvme_fabrics 
00:28:24.062 rmmod nvme_keyring 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 337051 ']' 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 337051 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 337051 ']' 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 337051 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:28:24.062 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 337051 
00:28:24.322 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:28:24.322 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:28:24.322 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 337051' 
00:28:24.322 killing process with pid 337051 
00:28:24.322 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 337051 
00:28:24.322 11:14:38 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 337051 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.583 11:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.490 11:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.490 00:28:26.490 real 0m7.282s 00:28:26.490 user 0m10.753s 00:28:26.490 sys 0m2.343s 00:28:26.490 11:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.490 11:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:26.490 ************************************ 00:28:26.490 END TEST nvmf_multicontroller 00:28:26.490 ************************************ 00:28:26.490 11:14:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:26.490 11:14:40 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:26.490 11:14:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:26.490 11:14:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.490 11:14:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.490 ************************************ 00:28:26.490 START TEST nvmf_aer 00:28:26.490 ************************************ 00:28:26.490 11:14:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:26.749 * Looking for test storage... 
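Stripped of the NOT/xtrace scaffolding, the nvmf_multicontroller run that just finished reduces to a short RPC conversation. A condensed sketch follows, assuming the harness's rpc_cmd helper is equivalent to calling SPDK's scripts/rpc.py directly (socket paths, NQNs and ports are the ones from the trace; JSON-RPC code -114 is -EALREADY):

# Target side, against the default /var/tmp/spdk.sock: transport, bdevs, subsystems, listeners
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# (cnode2 with Malloc1 is created the same way, also listening on 4420 and 4421)

# Host side, against the bdevperf app started with:
#   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000      # creates NVMe0n1
# Reusing the name NVMe0 with a different host NQN, with cnode2, or with
# -x disable / -x failover against the same path all fail with -114, as the
# request/response dumps above show.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1                           # second path: accepted
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$RPC bdev_nvme_get_controllers | grep -c NVMe                       # the test expects 2

The I/O itself is then kicked off out of band with 'examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests', whose result table appears in the try.txt dump above.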
00:28:26.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.749 11:14:40 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.750 11:14:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.653 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.654 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:28.654 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.654 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.654 
11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.654 11:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:28:28.654 00:28:28.654 --- 10.0.0.2 ping statistics --- 00:28:28.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.654 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:28.654 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:28.913 00:28:28.913 --- 10.0.0.1 ping statistics --- 00:28:28.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.913 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=339279 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 339279 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 339279 ']' 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.913 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 [2024-07-11 11:14:43.151629] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:28.913 [2024-07-11 11:14:43.151699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.913 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.913 [2024-07-11 11:14:43.212270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.913 [2024-07-11 11:14:43.297567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.913 [2024-07-11 11:14:43.297617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:28.913 [2024-07-11 11:14:43.297642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.913 [2024-07-11 11:14:43.297653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.913 [2024-07-11 11:14:43.297663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.913 [2024-07-11 11:14:43.297869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.913 [2024-07-11 11:14:43.297897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.913 [2024-07-11 11:14:43.297955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.913 [2024-07-11 11:14:43.297958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.173 [2024-07-11 11:14:43.450715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.173 Malloc0 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.173 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.174 [2024-07-11 11:14:43.502496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.174 [ 00:28:29.174 { 00:28:29.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:29.174 "subtype": "Discovery", 00:28:29.174 "listen_addresses": [], 00:28:29.174 "allow_any_host": true, 00:28:29.174 "hosts": [] 00:28:29.174 }, 00:28:29.174 { 00:28:29.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.174 "subtype": "NVMe", 00:28:29.174 "listen_addresses": [ 00:28:29.174 { 00:28:29.174 "trtype": "TCP", 00:28:29.174 "adrfam": "IPv4", 00:28:29.174 "traddr": "10.0.0.2", 00:28:29.174 "trsvcid": "4420" 00:28:29.174 } 00:28:29.174 ], 00:28:29.174 "allow_any_host": true, 00:28:29.174 "hosts": [], 00:28:29.174 "serial_number": "SPDK00000000000001", 00:28:29.174 "model_number": "SPDK bdev Controller", 00:28:29.174 "max_namespaces": 2, 00:28:29.174 "min_cntlid": 1, 00:28:29.174 "max_cntlid": 65519, 00:28:29.174 "namespaces": [ 00:28:29.174 { 00:28:29.174 "nsid": 1, 00:28:29.174 "bdev_name": "Malloc0", 00:28:29.174 "name": "Malloc0", 00:28:29.174 "nguid": "5AB542CEDBE742808BEE4B1A14654D40", 00:28:29.174 "uuid": "5ab542ce-dbe7-4280-8bee-4b1a14654d40" 00:28:29.174 } 00:28:29.174 ] 00:28:29.174 } 00:28:29.174 ] 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=339349 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:29.174 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:29.174 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.434 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 Malloc1 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 [ 00:28:29.694 { 00:28:29.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:29.694 "subtype": "Discovery", 00:28:29.694 "listen_addresses": [], 00:28:29.694 "allow_any_host": true, 00:28:29.694 "hosts": [] 00:28:29.694 }, 00:28:29.694 { 00:28:29.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.694 "subtype": "NVMe", 00:28:29.694 "listen_addresses": [ 00:28:29.694 { 00:28:29.694 "trtype": "TCP", 00:28:29.694 "adrfam": "IPv4", 00:28:29.694 "traddr": "10.0.0.2", 00:28:29.694 "trsvcid": "4420" 00:28:29.694 } 00:28:29.694 ], 00:28:29.694 "allow_any_host": true, 00:28:29.694 "hosts": [], 00:28:29.694 "serial_number": "SPDK00000000000001", 00:28:29.694 "model_number": "SPDK bdev Controller", 00:28:29.694 "max_namespaces": 2, 00:28:29.694 "min_cntlid": 1, 00:28:29.694 "max_cntlid": 65519, 00:28:29.694 "namespaces": [ 00:28:29.694 { 00:28:29.694 "nsid": 1, 00:28:29.694 "bdev_name": "Malloc0", 00:28:29.694 "name": "Malloc0", 00:28:29.694 "nguid": "5AB542CEDBE742808BEE4B1A14654D40", 00:28:29.694 "uuid": "5ab542ce-dbe7-4280-8bee-4b1a14654d40" 00:28:29.694 }, 00:28:29.694 { 00:28:29.694 "nsid": 2, 00:28:29.694 "bdev_name": "Malloc1", 00:28:29.694 "name": "Malloc1", 00:28:29.694 "nguid": "681E5ACAC0E644438675994233A4D2D6", 00:28:29.694 "uuid": "681e5aca-c0e6-4443-8675-994233a4d2d6" 00:28:29.694 } 00:28:29.694 ] 00:28:29.694 } 00:28:29.694 ] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 339349 00:28:29.694 Asynchronous Event Request test 00:28:29.694 Attaching to 10.0.0.2 00:28:29.694 Attached to 10.0.0.2 00:28:29.694 Registering asynchronous event callbacks... 00:28:29.694 Starting namespace attribute notice tests for all controllers... 
00:28:29.694 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:29.694 aer_cb - Changed Namespace 00:28:29.694 Cleaning up... 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.694 11:14:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.694 rmmod nvme_tcp 00:28:29.694 rmmod nvme_fabrics 00:28:29.694 rmmod nvme_keyring 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 339279 ']' 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 339279 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 339279 ']' 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 339279 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.694 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 339279 00:28:29.695 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.695 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.695 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 339279' 00:28:29.695 killing process with pid 339279 00:28:29.695 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 339279 00:28:29.695 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 339279 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
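That aer_cb line is the point of the whole test: with the aer tool connected and Malloc0 already attached, hot-adding a second namespace raises a Namespace Attribute Changed event (log page 4) that the registered callback observes. A minimal reproduction of the trigger with the SPDK RPC client (rpc.py path assumed relative to the SPDK tree; a host such as the test/nvme/aer/aer tool must already be connected to see the AEN):

# sketch: set up the subsystem, then fire the AEN by attaching a second namespace
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# with a controller attached, this add_ns raises the Changed Namespace AEN:
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2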
00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.955 11:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.489 11:14:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.489 00:28:32.489 real 0m5.463s 00:28:32.489 user 0m4.579s 00:28:32.489 sys 0m1.940s 00:28:32.489 11:14:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.489 11:14:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:32.489 ************************************ 00:28:32.489 END TEST nvmf_aer 00:28:32.489 ************************************ 00:28:32.489 11:14:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:32.489 11:14:46 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.489 11:14:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:32.489 11:14:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.489 11:14:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.489 ************************************ 00:28:32.489 START TEST nvmf_async_init 00:28:32.489 ************************************ 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:32.489 * Looking for test storage... 
00:28:32.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.489 11:14:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=85fdba13c8904a91a9c20b087ef3ebdf 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.490 11:14:46 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.490 11:14:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:34.391 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:34.391 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.391 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:34.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
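The discovery pass above (a repeat of the one run for nvmf_aer) reduces to a sysfs walk: each supported PCI function is mapped to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net, then stripping the path prefix. A standalone sketch using the BDF from this run:

# list the net devices behind one PCI function (sketch)
pci=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] && echo "${dev##*/}"   # prints e.g. cvl_0_0
done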
00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:34.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:28:34.392 00:28:34.392 --- 10.0.0.2 ping statistics --- 00:28:34.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.392 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:34.392 00:28:34.392 --- 10.0.0.1 ping statistics --- 00:28:34.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.392 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=341361 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 341361 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 341361 ']' 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.392 11:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.392 [2024-07-11 11:14:48.799977] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
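nvmfappstart, whose output begins here, launches nvmf_tgt inside the test namespace and then blocks in waitforlisten until the app answers on its UNIX-domain RPC socket. Roughly equivalent shell, with paths taken from this workspace (the polling loop is a simplified stand-in for waitforlisten):

# sketch: start the target in the namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done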
00:28:34.392 [2024-07-11 11:14:48.800065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.651 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.651 [2024-07-11 11:14:48.866356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.651 [2024-07-11 11:14:48.957636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.651 [2024-07-11 11:14:48.957689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.651 [2024-07-11 11:14:48.957703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.651 [2024-07-11 11:14:48.957715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.651 [2024-07-11 11:14:48.957724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.651 [2024-07-11 11:14:48.957776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.651 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.651 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:34.651 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:34.651 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:34.651 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 [2024-07-11 11:14:49.101567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 null0 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 11:14:49 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 85fdba13c8904a91a9c20b087ef3ebdf 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.911 [2024-07-11 11:14:49.141858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.911 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.170 nvme0n1 00:28:35.170 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.170 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.170 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.170 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.170 [ 00:28:35.170 { 00:28:35.170 "name": "nvme0n1", 00:28:35.170 "aliases": [ 00:28:35.170 "85fdba13-c890-4a91-a9c2-0b087ef3ebdf" 00:28:35.170 ], 00:28:35.170 "product_name": "NVMe disk", 00:28:35.170 "block_size": 512, 00:28:35.170 "num_blocks": 2097152, 00:28:35.170 "uuid": "85fdba13-c890-4a91-a9c2-0b087ef3ebdf", 00:28:35.170 "assigned_rate_limits": { 00:28:35.170 "rw_ios_per_sec": 0, 00:28:35.170 "rw_mbytes_per_sec": 0, 00:28:35.170 "r_mbytes_per_sec": 0, 00:28:35.170 "w_mbytes_per_sec": 0 00:28:35.170 }, 00:28:35.170 "claimed": false, 00:28:35.170 "zoned": false, 00:28:35.170 "supported_io_types": { 00:28:35.170 "read": true, 00:28:35.170 "write": true, 00:28:35.170 "unmap": false, 00:28:35.170 "flush": true, 00:28:35.170 "reset": true, 00:28:35.170 "nvme_admin": true, 00:28:35.170 "nvme_io": true, 00:28:35.170 "nvme_io_md": false, 00:28:35.170 "write_zeroes": true, 00:28:35.170 "zcopy": false, 00:28:35.170 "get_zone_info": false, 00:28:35.170 "zone_management": false, 00:28:35.170 "zone_append": false, 00:28:35.170 "compare": true, 00:28:35.170 "compare_and_write": true, 00:28:35.170 "abort": true, 00:28:35.170 "seek_hole": false, 00:28:35.170 "seek_data": false, 00:28:35.170 "copy": true, 00:28:35.170 "nvme_iov_md": false 00:28:35.170 }, 00:28:35.170 "memory_domains": [ 00:28:35.170 { 00:28:35.170 "dma_device_id": "system", 00:28:35.170 "dma_device_type": 1 00:28:35.170 } 00:28:35.170 ], 00:28:35.170 "driver_specific": { 00:28:35.170 "nvme": [ 00:28:35.170 { 00:28:35.170 "trid": { 00:28:35.170 "trtype": "TCP", 00:28:35.170 "adrfam": "IPv4", 00:28:35.170 "traddr": "10.0.0.2", 
00:28:35.170 "trsvcid": "4420", 00:28:35.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.170 }, 00:28:35.170 "ctrlr_data": { 00:28:35.170 "cntlid": 1, 00:28:35.171 "vendor_id": "0x8086", 00:28:35.171 "model_number": "SPDK bdev Controller", 00:28:35.171 "serial_number": "00000000000000000000", 00:28:35.171 "firmware_revision": "24.09", 00:28:35.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.171 "oacs": { 00:28:35.171 "security": 0, 00:28:35.171 "format": 0, 00:28:35.171 "firmware": 0, 00:28:35.171 "ns_manage": 0 00:28:35.171 }, 00:28:35.171 "multi_ctrlr": true, 00:28:35.171 "ana_reporting": false 00:28:35.171 }, 00:28:35.171 "vs": { 00:28:35.171 "nvme_version": "1.3" 00:28:35.171 }, 00:28:35.171 "ns_data": { 00:28:35.171 "id": 1, 00:28:35.171 "can_share": true 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ], 00:28:35.171 "mp_policy": "active_passive" 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ] 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.171 [2024-07-11 11:14:49.394879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.171 [2024-07-11 11:14:49.394963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2302c40 (9): Bad file descriptor 00:28:35.171 [2024-07-11 11:14:49.567906] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.171 [ 00:28:35.171 { 00:28:35.171 "name": "nvme0n1", 00:28:35.171 "aliases": [ 00:28:35.171 "85fdba13-c890-4a91-a9c2-0b087ef3ebdf" 00:28:35.171 ], 00:28:35.171 "product_name": "NVMe disk", 00:28:35.171 "block_size": 512, 00:28:35.171 "num_blocks": 2097152, 00:28:35.171 "uuid": "85fdba13-c890-4a91-a9c2-0b087ef3ebdf", 00:28:35.171 "assigned_rate_limits": { 00:28:35.171 "rw_ios_per_sec": 0, 00:28:35.171 "rw_mbytes_per_sec": 0, 00:28:35.171 "r_mbytes_per_sec": 0, 00:28:35.171 "w_mbytes_per_sec": 0 00:28:35.171 }, 00:28:35.171 "claimed": false, 00:28:35.171 "zoned": false, 00:28:35.171 "supported_io_types": { 00:28:35.171 "read": true, 00:28:35.171 "write": true, 00:28:35.171 "unmap": false, 00:28:35.171 "flush": true, 00:28:35.171 "reset": true, 00:28:35.171 "nvme_admin": true, 00:28:35.171 "nvme_io": true, 00:28:35.171 "nvme_io_md": false, 00:28:35.171 "write_zeroes": true, 00:28:35.171 "zcopy": false, 00:28:35.171 "get_zone_info": false, 00:28:35.171 "zone_management": false, 00:28:35.171 "zone_append": false, 00:28:35.171 "compare": true, 00:28:35.171 "compare_and_write": true, 00:28:35.171 "abort": true, 00:28:35.171 "seek_hole": false, 00:28:35.171 "seek_data": false, 00:28:35.171 "copy": true, 00:28:35.171 "nvme_iov_md": false 00:28:35.171 }, 00:28:35.171 "memory_domains": [ 00:28:35.171 { 00:28:35.171 "dma_device_id": "system", 00:28:35.171 "dma_device_type": 
1 00:28:35.171 } 00:28:35.171 ], 00:28:35.171 "driver_specific": { 00:28:35.171 "nvme": [ 00:28:35.171 { 00:28:35.171 "trid": { 00:28:35.171 "trtype": "TCP", 00:28:35.171 "adrfam": "IPv4", 00:28:35.171 "traddr": "10.0.0.2", 00:28:35.171 "trsvcid": "4420", 00:28:35.171 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.171 }, 00:28:35.171 "ctrlr_data": { 00:28:35.171 "cntlid": 2, 00:28:35.171 "vendor_id": "0x8086", 00:28:35.171 "model_number": "SPDK bdev Controller", 00:28:35.171 "serial_number": "00000000000000000000", 00:28:35.171 "firmware_revision": "24.09", 00:28:35.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.171 "oacs": { 00:28:35.171 "security": 0, 00:28:35.171 "format": 0, 00:28:35.171 "firmware": 0, 00:28:35.171 "ns_manage": 0 00:28:35.171 }, 00:28:35.171 "multi_ctrlr": true, 00:28:35.171 "ana_reporting": false 00:28:35.171 }, 00:28:35.171 "vs": { 00:28:35.171 "nvme_version": "1.3" 00:28:35.171 }, 00:28:35.171 "ns_data": { 00:28:35.171 "id": 1, 00:28:35.171 "can_share": true 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ], 00:28:35.171 "mp_policy": "active_passive" 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ] 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.171 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Pd8BoLavrA 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Pd8BoLavrA 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 [2024-07-11 11:14:49.619629] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:35.431 [2024-07-11 11:14:49.619838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pd8BoLavrA 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
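The async_init test then repeats the attach over a TLS-protected listener: a pre-shared key in the NVMe TLS interchange format is written to a mode-0600 temp file, the subsystem is closed to arbitrary hosts, and a second listener on port 4421 is created with --secure-channel. The same flow as plain rpc.py calls — the key value is copied from this run, and /tmp/psk.txt is just an example path where the test uses mktemp:

KEY=/tmp/psk.txt   # example path; the logged run uses a mktemp file
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"

Note the WARNING lines that follow: at this SPDK revision both the PSK-path option and spdk_nvme_ctrlr_opts.psk are flagged as deprecated features scheduled for removal in v24.09.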
00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 [2024-07-11 11:14:49.627632] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pd8BoLavrA 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 [2024-07-11 11:14:49.635663] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:35.431 [2024-07-11 11:14:49.635725] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:35.431 nvme0n1 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.431 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.431 [ 00:28:35.431 { 00:28:35.431 "name": "nvme0n1", 00:28:35.431 "aliases": [ 00:28:35.431 "85fdba13-c890-4a91-a9c2-0b087ef3ebdf" 00:28:35.431 ], 00:28:35.431 "product_name": "NVMe disk", 00:28:35.431 "block_size": 512, 00:28:35.431 "num_blocks": 2097152, 00:28:35.431 "uuid": "85fdba13-c890-4a91-a9c2-0b087ef3ebdf", 00:28:35.431 "assigned_rate_limits": { 00:28:35.431 "rw_ios_per_sec": 0, 00:28:35.431 "rw_mbytes_per_sec": 0, 00:28:35.431 "r_mbytes_per_sec": 0, 00:28:35.431 "w_mbytes_per_sec": 0 00:28:35.431 }, 00:28:35.431 "claimed": false, 00:28:35.431 "zoned": false, 00:28:35.431 "supported_io_types": { 00:28:35.431 "read": true, 00:28:35.431 "write": true, 00:28:35.431 "unmap": false, 00:28:35.431 "flush": true, 00:28:35.431 "reset": true, 00:28:35.431 "nvme_admin": true, 00:28:35.431 "nvme_io": true, 00:28:35.431 "nvme_io_md": false, 00:28:35.431 "write_zeroes": true, 00:28:35.431 "zcopy": false, 00:28:35.431 "get_zone_info": false, 00:28:35.431 "zone_management": false, 00:28:35.431 "zone_append": false, 00:28:35.432 "compare": true, 00:28:35.432 "compare_and_write": true, 00:28:35.432 "abort": true, 00:28:35.432 "seek_hole": false, 00:28:35.432 "seek_data": false, 00:28:35.432 "copy": true, 00:28:35.432 "nvme_iov_md": false 00:28:35.432 }, 00:28:35.432 "memory_domains": [ 00:28:35.432 { 00:28:35.432 "dma_device_id": "system", 00:28:35.432 "dma_device_type": 1 00:28:35.432 } 00:28:35.432 ], 00:28:35.432 "driver_specific": { 00:28:35.432 "nvme": [ 00:28:35.432 { 00:28:35.432 "trid": { 00:28:35.432 "trtype": "TCP", 00:28:35.432 "adrfam": "IPv4", 00:28:35.432 "traddr": "10.0.0.2", 00:28:35.432 "trsvcid": "4421", 00:28:35.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.432 }, 00:28:35.432 "ctrlr_data": { 00:28:35.432 "cntlid": 3, 00:28:35.432 "vendor_id": "0x8086", 00:28:35.432 "model_number": "SPDK bdev Controller", 00:28:35.432 "serial_number": "00000000000000000000", 00:28:35.432 "firmware_revision": "24.09", 00:28:35.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:35.432 "oacs": { 00:28:35.432 "security": 0, 00:28:35.432 "format": 0, 00:28:35.432 "firmware": 0, 00:28:35.432 "ns_manage": 0 00:28:35.432 }, 00:28:35.432 "multi_ctrlr": true, 00:28:35.432 "ana_reporting": false 00:28:35.432 }, 00:28:35.432 "vs": { 00:28:35.432 "nvme_version": "1.3" 00:28:35.432 }, 00:28:35.432 "ns_data": { 00:28:35.432 "id": 1, 00:28:35.432 "can_share": true 00:28:35.432 } 00:28:35.432 } 00:28:35.432 ], 00:28:35.432 "mp_policy": "active_passive" 00:28:35.432 } 00:28:35.432 } 00:28:35.432 ] 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Pd8BoLavrA 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.432 rmmod nvme_tcp 00:28:35.432 rmmod nvme_fabrics 00:28:35.432 rmmod nvme_keyring 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 341361 ']' 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 341361 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 341361 ']' 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 341361 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341361 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341361' 00:28:35.432 killing process with pid 341361 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 341361 00:28:35.432 [2024-07-11 11:14:49.819908] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:28:35.432 [2024-07-11 11:14:49.819943] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:35.432 11:14:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 341361 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.693 11:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.231 11:14:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.231 00:28:38.231 real 0m5.677s 00:28:38.231 user 0m2.127s 00:28:38.231 sys 0m1.947s 00:28:38.231 11:14:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.231 11:14:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:38.231 ************************************ 00:28:38.231 END TEST nvmf_async_init 00:28:38.231 ************************************ 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:38.231 11:14:52 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.231 ************************************ 00:28:38.231 START TEST dma 00:28:38.231 ************************************ 00:28:38.231 11:14:52 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:38.231 * Looking for test storage... 
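
The nvmf_async_init teardown just traced reduces to a handful of steps. A minimal sketch, with scripts/rpc.py again standing in for the harness's rpc_cmd, and $nvmfpid as a placeholder for the target PID (341361 in this run):

  scripts/rpc.py bdev_nvme_detach_controller nvme0   # drop the initiator-side bdev controller
  rm -f "$KEY_PATH"                                  # discard the PSK file created earlier
  kill "$nvmfpid"                                    # stop nvmf_tgt (pid 341361 in this run)
  modprobe -v -r nvme-tcp                            # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1                           # tear down the initiator-side address
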
00:28:38.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.231 11:14:52 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.231 11:14:52 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.231 11:14:52 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.231 11:14:52 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.231 11:14:52 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:38.231 11:14:52 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.231 11:14:52 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.231 11:14:52 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:38.231 11:14:52 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:38.231 00:28:38.231 real 0m0.076s 00:28:38.231 user 0m0.038s 00:28:38.231 sys 0m0.044s 00:28:38.231 11:14:52 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.231 11:14:52 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:38.231 ************************************ 00:28:38.231 END TEST dma 00:28:38.231 ************************************ 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:38.231 11:14:52 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.231 11:14:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:38.231 ************************************ 00:28:38.231 START TEST nvmf_identify 00:28:38.231 ************************************ 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:38.231 * Looking for test storage... 
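
The dma suite is effectively a no-op over TCP: host/dma.sh tests the transport and exits before doing any work, which is why the timing above shows it finishing in under a tenth of a second. A paraphrase of the guard at host/dma.sh lines 12-13, where TEST_TRANSPORT is an assumed variable name (the trace only shows the already-expanded test):

  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0      # DMA offload paths are exercised only over RDMA transports
  fi
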
00:28:38.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.231 11:14:52 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:38.232 11:14:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.136 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:40.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:40.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:40.137 00:28:40.137 --- 10.0.0.2 ping statistics --- 00:28:40.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.137 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:40.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:28:40.137 00:28:40.137 --- 10.0.0.1 ping statistics --- 00:28:40.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.137 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=343482 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 343482 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 343482 ']' 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.137 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.137 [2024-07-11 11:14:54.432642] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:40.137 [2024-07-11 11:14:54.432729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.137 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.137 [2024-07-11 11:14:54.495703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.395 [2024-07-11 11:14:54.578749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
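
For reference, the physical-NIC topology that nvmftestinit assembled above distills to the commands below, all taken from the trace: the first ice port (cvl_0_0) becomes the target NIC inside a private network namespace at 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with a ping in each direction to verify reachability.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

The target itself is then launched inside that namespace, exactly as traced: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF.
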
00:28:40.395 [2024-07-11 11:14:54.578804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.395 [2024-07-11 11:14:54.578833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.395 [2024-07-11 11:14:54.578845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.395 [2024-07-11 11:14:54.578855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.395 [2024-07-11 11:14:54.578923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.395 [2024-07-11 11:14:54.579007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.395 [2024-07-11 11:14:54.579076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.395 [2024-07-11 11:14:54.579078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.395 [2024-07-11 11:14:54.706616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.395 Malloc0 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.395 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.396 [2024-07-11 11:14:54.788314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:40.396 [ 00:28:40.396 { 00:28:40.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:40.396 "subtype": "Discovery", 00:28:40.396 "listen_addresses": [ 00:28:40.396 { 00:28:40.396 "trtype": "TCP", 00:28:40.396 "adrfam": "IPv4", 00:28:40.396 "traddr": "10.0.0.2", 00:28:40.396 "trsvcid": "4420" 00:28:40.396 } 00:28:40.396 ], 00:28:40.396 "allow_any_host": true, 00:28:40.396 "hosts": [] 00:28:40.396 }, 00:28:40.396 { 00:28:40.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.396 "subtype": "NVMe", 00:28:40.396 "listen_addresses": [ 00:28:40.396 { 00:28:40.396 "trtype": "TCP", 00:28:40.396 "adrfam": "IPv4", 00:28:40.396 "traddr": "10.0.0.2", 00:28:40.396 "trsvcid": "4420" 00:28:40.396 } 00:28:40.396 ], 00:28:40.396 "allow_any_host": true, 00:28:40.396 "hosts": [], 00:28:40.396 "serial_number": "SPDK00000000000001", 00:28:40.396 "model_number": "SPDK bdev Controller", 00:28:40.396 "max_namespaces": 32, 00:28:40.396 "min_cntlid": 1, 00:28:40.396 "max_cntlid": 65519, 00:28:40.396 "namespaces": [ 00:28:40.396 { 00:28:40.396 "nsid": 1, 00:28:40.396 "bdev_name": "Malloc0", 00:28:40.396 "name": "Malloc0", 00:28:40.396 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:40.396 "eui64": "ABCDEF0123456789", 00:28:40.396 "uuid": "5d5a2e83-0835-41f5-963c-a50dcee595de" 00:28:40.396 } 00:28:40.396 ] 00:28:40.396 } 00:28:40.396 ] 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.396 11:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:40.657 [2024-07-11 11:14:54.829807] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
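
The target-side provisioning that produced the subsystem listing above, plus the identify invocation whose startup banner follows, condense to the sequence below. Again a sketch with scripts/rpc.py standing in for rpc_cmd; every argument is copied from the trace.

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Query the discovery subsystem with the SPDK initiator, all trace groups enabled
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all
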
00:28:40.657 [2024-07-11 11:14:54.829852] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343510 ] 00:28:40.657 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.657 [2024-07-11 11:14:54.865025] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:40.657 [2024-07-11 11:14:54.865109] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:40.657 [2024-07-11 11:14:54.865120] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:40.657 [2024-07-11 11:14:54.865141] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:40.657 [2024-07-11 11:14:54.865151] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:40.657 [2024-07-11 11:14:54.865363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:40.657 [2024-07-11 11:14:54.865421] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2313ae0 0 00:28:40.657 [2024-07-11 11:14:54.871771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:40.657 [2024-07-11 11:14:54.871793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:40.657 [2024-07-11 11:14:54.871801] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:40.657 [2024-07-11 11:14:54.871807] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:40.657 [2024-07-11 11:14:54.871866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.871880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.871889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.657 [2024-07-11 11:14:54.871909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:40.657 [2024-07-11 11:14:54.871935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.657 [2024-07-11 11:14:54.879764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.657 [2024-07-11 11:14:54.879782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.657 [2024-07-11 11:14:54.879789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.879798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.657 [2024-07-11 11:14:54.879815] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:40.657 [2024-07-11 11:14:54.879827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:40.657 [2024-07-11 11:14:54.879836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:40.657 [2024-07-11 11:14:54.879861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.879870] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.879876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.657 [2024-07-11 11:14:54.879887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.657 [2024-07-11 11:14:54.879911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.657 [2024-07-11 11:14:54.880054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.657 [2024-07-11 11:14:54.880069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.657 [2024-07-11 11:14:54.880076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.657 [2024-07-11 11:14:54.880092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:40.657 [2024-07-11 11:14:54.880105] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:40.657 [2024-07-11 11:14:54.880124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.657 [2024-07-11 11:14:54.880150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.657 [2024-07-11 11:14:54.880170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.657 [2024-07-11 11:14:54.880244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.657 [2024-07-11 11:14:54.880256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.657 [2024-07-11 11:14:54.880263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.657 [2024-07-11 11:14:54.880279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:40.657 [2024-07-11 11:14:54.880293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:40.657 [2024-07-11 11:14:54.880304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.657 [2024-07-11 11:14:54.880328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.657 [2024-07-11 11:14:54.880348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.657 [2024-07-11 11:14:54.880430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.657 
[2024-07-11 11:14:54.880443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.657 [2024-07-11 11:14:54.880450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.657 [2024-07-11 11:14:54.880467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:40.657 [2024-07-11 11:14:54.880483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.657 [2024-07-11 11:14:54.880509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.657 [2024-07-11 11:14:54.880529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.657 [2024-07-11 11:14:54.880602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.657 [2024-07-11 11:14:54.880614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.657 [2024-07-11 11:14:54.880620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.657 [2024-07-11 11:14:54.880627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.657 [2024-07-11 11:14:54.880637] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:40.658 [2024-07-11 11:14:54.880646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:40.658 [2024-07-11 11:14:54.880659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:40.658 [2024-07-11 11:14:54.880769] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:40.658 [2024-07-11 11:14:54.880784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:40.658 [2024-07-11 11:14:54.880801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.880808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.880814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.880825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.658 [2024-07-11 11:14:54.880846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.658 [2024-07-11 11:14:54.880958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.880970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.658 [2024-07-11 11:14:54.880977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.880984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.658 [2024-07-11 11:14:54.880992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:40.658 [2024-07-11 11:14:54.881008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.658 [2024-07-11 11:14:54.881054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.658 [2024-07-11 11:14:54.881132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.881145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.658 [2024-07-11 11:14:54.881151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.658 [2024-07-11 11:14:54.881166] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:40.658 [2024-07-11 11:14:54.881175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881189] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:40.658 [2024-07-11 11:14:54.881203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.658 [2024-07-11 11:14:54.881259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.658 [2024-07-11 11:14:54.881388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.658 [2024-07-11 11:14:54.881400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.658 [2024-07-11 11:14:54.881407] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881414] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2313ae0): datao=0, datal=4096, cccid=0 00:28:40.658 [2024-07-11 11:14:54.881426] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a240) on tqpair(0x2313ae0): expected_datao=0, payload_size=4096 00:28:40.658 [2024-07-11 11:14:54.881435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881448] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881457] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.881480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.658 [2024-07-11 11:14:54.881486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.658 [2024-07-11 11:14:54.881507] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:40.658 [2024-07-11 11:14:54.881520] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:40.658 [2024-07-11 11:14:54.881529] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:40.658 [2024-07-11 11:14:54.881538] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:40.658 [2024-07-11 11:14:54.881547] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:40.658 [2024-07-11 11:14:54.881555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.658 [2024-07-11 11:14:54.881630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.658 [2024-07-11 11:14:54.881720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.881734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.658 [2024-07-11 11:14:54.881741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.658 [2024-07-11 11:14:54.881772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.658 [2024-07-11 11:14:54.881806] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.658 [2024-07-11 11:14:54.881837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.658 [2024-07-11 11:14:54.881874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.658 [2024-07-11 11:14:54.881904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:40.658 [2024-07-11 11:14:54.881938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.881945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.881956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.658 [2024-07-11 11:14:54.881978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a240, cid 0, qid 0 00:28:40.658 [2024-07-11 11:14:54.881989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a3c0, cid 1, qid 0 00:28:40.658 [2024-07-11 11:14:54.881997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a540, cid 2, qid 0 00:28:40.658 [2024-07-11 11:14:54.882005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.658 [2024-07-11 11:14:54.882012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a840, cid 4, qid 0 00:28:40.658 [2024-07-11 11:14:54.882150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.882162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.658 [2024-07-11 11:14:54.882169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x2313ae0 00:28:40.658 [2024-07-11 11:14:54.882185] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:40.658 [2024-07-11 11:14:54.882195] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:40.658 [2024-07-11 11:14:54.882211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2313ae0) 00:28:40.658 [2024-07-11 11:14:54.882231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.658 [2024-07-11 11:14:54.882251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a840, cid 4, qid 0 00:28:40.658 [2024-07-11 11:14:54.882337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.658 [2024-07-11 11:14:54.882349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.658 [2024-07-11 11:14:54.882356] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882362] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2313ae0): datao=0, datal=4096, cccid=4 00:28:40.658 [2024-07-11 11:14:54.882370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a840) on tqpair(0x2313ae0): expected_datao=0, payload_size=4096 00:28:40.658 [2024-07-11 11:14:54.882377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882392] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882405] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.658 [2024-07-11 11:14:54.882417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.658 [2024-07-11 11:14:54.882427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.659 [2024-07-11 11:14:54.882434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x2313ae0 00:28:40.659 [2024-07-11 11:14:54.882459] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:40.659 [2024-07-11 11:14:54.882500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2313ae0) 00:28:40.659 [2024-07-11 11:14:54.882522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.659 [2024-07-11 11:14:54.882534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2313ae0) 00:28:40.659 [2024-07-11 11:14:54.882556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.659 [2024-07-11 11:14:54.882583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x236a840, cid 4, qid 0 00:28:40.659 [2024-07-11 11:14:54.882595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a9c0, cid 5, qid 0 00:28:40.659 [2024-07-11 11:14:54.882715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.659 [2024-07-11 11:14:54.882727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.659 [2024-07-11 11:14:54.882734] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882740] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2313ae0): datao=0, datal=1024, cccid=4 00:28:40.659 [2024-07-11 11:14:54.882748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a840) on tqpair(0x2313ae0): expected_datao=0, payload_size=1024 00:28:40.659 [2024-07-11 11:14:54.882763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882774] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882781] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.659 [2024-07-11 11:14:54.882799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.659 [2024-07-11 11:14:54.882805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.882812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a9c0) on tqpair=0x2313ae0 00:28:40.659 [2024-07-11 11:14:54.922882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.659 [2024-07-11 11:14:54.922900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.659 [2024-07-11 11:14:54.922907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.922914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x2313ae0 00:28:40.659 [2024-07-11 11:14:54.922933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.922941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2313ae0) 00:28:40.659 [2024-07-11 11:14:54.922952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.659 [2024-07-11 11:14:54.922982] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a840, cid 4, qid 0 00:28:40.659 [2024-07-11 11:14:54.923077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.659 [2024-07-11 11:14:54.923094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.659 [2024-07-11 11:14:54.923102] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.923108] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2313ae0): datao=0, datal=3072, cccid=4 00:28:40.659 [2024-07-11 11:14:54.923116] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a840) on tqpair(0x2313ae0): expected_datao=0, payload_size=3072 00:28:40.659 [2024-07-11 11:14:54.923123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.923143] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.923152] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.966773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.659 [2024-07-11 11:14:54.966791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.659 [2024-07-11 11:14:54.966799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.966806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x2313ae0 00:28:40.659 [2024-07-11 11:14:54.966822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.966831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2313ae0) 00:28:40.659 [2024-07-11 11:14:54.966854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.659 [2024-07-11 11:14:54.966885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a840, cid 4, qid 0 00:28:40.659 [2024-07-11 11:14:54.966976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.659 [2024-07-11 11:14:54.966988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.659 [2024-07-11 11:14:54.966995] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.967001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2313ae0): datao=0, datal=8, cccid=4 00:28:40.659 [2024-07-11 11:14:54.967009] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x236a840) on tqpair(0x2313ae0): expected_datao=0, payload_size=8 00:28:40.659 [2024-07-11 11:14:54.967016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.967026] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:54.967034] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:55.007834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.659 [2024-07-11 11:14:55.007854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.659 [2024-07-11 11:14:55.007861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.659 [2024-07-11 11:14:55.007868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a840) on tqpair=0x2313ae0
00:28:40.659 =====================================================
00:28:40.659 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:40.659 =====================================================
00:28:40.659 Controller Capabilities/Features
00:28:40.659 ================================
00:28:40.659 Vendor ID: 0000
00:28:40.659 Subsystem Vendor ID: 0000
00:28:40.659 Serial Number: ....................
00:28:40.659 Model Number: ........................................
00:28:40.659 Firmware Version: 24.09
00:28:40.659 Recommended Arb Burst: 0
00:28:40.659 IEEE OUI Identifier: 00 00 00
00:28:40.659 Multi-path I/O
00:28:40.659 May have multiple subsystem ports: No
00:28:40.659 May have multiple controllers: No
00:28:40.659 Associated with SR-IOV VF: No
00:28:40.659 Max Data Transfer Size: 131072
00:28:40.659 Max Number of Namespaces: 0
00:28:40.659 Max Number of I/O Queues: 1024
00:28:40.659 NVMe Specification Version (VS): 1.3
00:28:40.659 NVMe Specification Version (Identify): 1.3
00:28:40.659 Maximum Queue Entries: 128
00:28:40.659 Contiguous Queues Required: Yes
00:28:40.659 Arbitration Mechanisms Supported
00:28:40.659 Weighted Round Robin: Not Supported
00:28:40.659 Vendor Specific: Not Supported
00:28:40.659 Reset Timeout: 15000 ms
00:28:40.659 Doorbell Stride: 4 bytes
00:28:40.659 NVM Subsystem Reset: Not Supported
00:28:40.659 Command Sets Supported
00:28:40.659 NVM Command Set: Supported
00:28:40.659 Boot Partition: Not Supported
00:28:40.659 Memory Page Size Minimum: 4096 bytes
00:28:40.659 Memory Page Size Maximum: 4096 bytes
00:28:40.659 Persistent Memory Region: Not Supported
00:28:40.659 Optional Asynchronous Events Supported
00:28:40.659 Namespace Attribute Notices: Not Supported
00:28:40.659 Firmware Activation Notices: Not Supported
00:28:40.659 ANA Change Notices: Not Supported
00:28:40.659 PLE Aggregate Log Change Notices: Not Supported
00:28:40.659 LBA Status Info Alert Notices: Not Supported
00:28:40.659 EGE Aggregate Log Change Notices: Not Supported
00:28:40.659 Normal NVM Subsystem Shutdown event: Not Supported
00:28:40.659 Zone Descriptor Change Notices: Not Supported
00:28:40.659 Discovery Log Change Notices: Supported
00:28:40.659 Controller Attributes
00:28:40.659 128-bit Host Identifier: Not Supported
00:28:40.659 Non-Operational Permissive Mode: Not Supported
00:28:40.659 NVM Sets: Not Supported
00:28:40.659 Read Recovery Levels: Not Supported
00:28:40.659 Endurance Groups: Not Supported
00:28:40.659 Predictable Latency Mode: Not Supported
00:28:40.659 Traffic Based Keep ALive: Not Supported
00:28:40.659 Namespace Granularity: Not Supported
00:28:40.659 SQ Associations: Not Supported
00:28:40.659 UUID List: Not Supported
00:28:40.659 Multi-Domain Subsystem: Not Supported
00:28:40.659 Fixed Capacity Management: Not Supported
00:28:40.659 Variable Capacity Management: Not Supported
00:28:40.659 Delete Endurance Group: Not Supported
00:28:40.659 Delete NVM Set: Not Supported
00:28:40.659 Extended LBA Formats Supported: Not Supported
00:28:40.659 Flexible Data Placement Supported: Not Supported
00:28:40.659
00:28:40.659 Controller Memory Buffer Support
00:28:40.659 ================================
00:28:40.659 Supported: No
00:28:40.659
00:28:40.659 Persistent Memory Region Support
00:28:40.659 ================================
00:28:40.659 Supported: No
00:28:40.659
00:28:40.659 Admin Command Set Attributes
00:28:40.659 ============================
00:28:40.659 Security Send/Receive: Not Supported
00:28:40.659 Format NVM: Not Supported
00:28:40.659 Firmware Activate/Download: Not Supported
00:28:40.659 Namespace Management: Not Supported
00:28:40.659 Device Self-Test: Not Supported
00:28:40.659 Directives: Not Supported
00:28:40.659 NVMe-MI: Not Supported
00:28:40.659 Virtualization Management: Not Supported
00:28:40.659 Doorbell Buffer Config: Not Supported
00:28:40.659 Get LBA Status Capability: Not Supported
00:28:40.659 Command & Feature Lockdown Capability: Not Supported
00:28:40.659 Abort Command Limit: 1
00:28:40.659 Async Event Request Limit: 4
00:28:40.659 Number of Firmware Slots: N/A
00:28:40.659 Firmware Slot 1 Read-Only: N/A
00:28:40.659 Firmware Activation Without Reset: N/A
00:28:40.659 Multiple Update Detection Support: N/A
00:28:40.659 Firmware Update Granularity: No Information Provided
00:28:40.659 Per-Namespace SMART Log: No
00:28:40.659 Asymmetric Namespace Access Log Page: Not Supported
00:28:40.659 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:40.659 Command Effects Log Page: Not Supported
00:28:40.660 Get Log Page Extended Data: Supported
00:28:40.660 Telemetry Log Pages: Not Supported
00:28:40.660 Persistent Event Log Pages: Not Supported
00:28:40.660 Supported Log Pages Log Page: May Support
00:28:40.660 Commands Supported & Effects Log Page: Not Supported
00:28:40.660 Feature Identifiers & Effects Log Page:May Support
00:28:40.660 NVMe-MI Commands & Effects Log Page: May Support
00:28:40.660 Data Area 4 for Telemetry Log: Not Supported
00:28:40.660 Error Log Page Entries Supported: 128
00:28:40.660 Keep Alive: Not Supported
00:28:40.660
00:28:40.660 NVM Command Set Attributes
00:28:40.660 ==========================
00:28:40.660 Submission Queue Entry Size
00:28:40.660 Max: 1
00:28:40.660 Min: 1
00:28:40.660 Completion Queue Entry Size
00:28:40.660 Max: 1
00:28:40.660 Min: 1
00:28:40.660 Number of Namespaces: 0
00:28:40.660 Compare Command: Not Supported
00:28:40.660 Write Uncorrectable Command: Not Supported
00:28:40.660 Dataset Management Command: Not Supported
00:28:40.660 Write Zeroes Command: Not Supported
00:28:40.660 Set Features Save Field: Not Supported
00:28:40.660 Reservations: Not Supported
00:28:40.660 Timestamp: Not Supported
00:28:40.660 Copy: Not Supported
00:28:40.660 Volatile Write Cache: Not Present
00:28:40.660 Atomic Write Unit (Normal): 1
00:28:40.660 Atomic Write Unit (PFail): 1
00:28:40.660 Atomic Compare & Write Unit: 1
00:28:40.660 Fused Compare & Write: Supported
00:28:40.660 Scatter-Gather List
00:28:40.660 SGL Command Set: Supported
00:28:40.660 SGL Keyed: Supported
00:28:40.660 SGL Bit Bucket Descriptor: Not Supported
00:28:40.660 SGL Metadata Pointer: Not Supported
00:28:40.660 Oversized SGL: Not Supported
00:28:40.660 SGL Metadata Address: Not Supported
00:28:40.660 SGL Offset: Supported
00:28:40.660 Transport SGL Data Block: Not Supported
00:28:40.660 Replay Protected Memory Block: Not Supported
00:28:40.660
00:28:40.660 Firmware Slot Information
00:28:40.660 =========================
00:28:40.660 Active slot: 0
00:28:40.660
00:28:40.660
00:28:40.660 Error Log
00:28:40.660 =========
00:28:40.660
00:28:40.660 Active Namespaces
00:28:40.660 =================
00:28:40.660 Discovery Log Page
00:28:40.660 ==================
00:28:40.660 Generation Counter: 2
00:28:40.660 Number of Records: 2
00:28:40.660 Record Format: 0
00:28:40.660
00:28:40.660 Discovery Log Entry 0
00:28:40.660 ----------------------
00:28:40.660 Transport Type: 3 (TCP)
00:28:40.660 Address Family: 1 (IPv4)
00:28:40.660 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:40.660 Entry Flags:
00:28:40.660 Duplicate Returned Information: 1
00:28:40.660 Explicit Persistent Connection Support for Discovery: 1
00:28:40.660 Transport Requirements:
00:28:40.660 Secure Channel: Not Required
00:28:40.660 Port ID: 0 (0x0000)
00:28:40.660 Controller ID: 65535 (0xffff)
00:28:40.660 Admin Max SQ Size: 128
00:28:40.660 Transport Service Identifier: 4420
00:28:40.660 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:40.660 Transport Address: 10.0.0.2
00:28:40.660 Discovery Log Entry 1
00:28:40.660 ----------------------
00:28:40.660 Transport Type: 3 (TCP)
00:28:40.660 Address Family: 1 (IPv4)
00:28:40.660 Subsystem Type: 2 (NVM Subsystem)
00:28:40.660 Entry Flags:
00:28:40.660 Duplicate Returned Information: 0
00:28:40.660 Explicit Persistent Connection Support for Discovery: 0
00:28:40.660 Transport Requirements:
00:28:40.660 Secure Channel: Not Required
00:28:40.660 Port ID: 0 (0x0000)
00:28:40.660 Controller ID: 65535 (0xffff)
00:28:40.660 Admin Max SQ Size: 128
00:28:40.660 Transport Service Identifier: 4420
00:28:40.660 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:40.660 Transport Address: 10.0.0.2
[2024-07-11 11:14:55.007986] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:40.660 [2024-07-11 11:14:55.008009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a240) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.660 [2024-07-11 11:14:55.008030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a3c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.660 [2024-07-11 11:14:55.008047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a540) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.660 [2024-07-11 11:14:55.008063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.660 [2024-07-11 11:14:55.008092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11 11:14:55.008138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.008162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.008279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.008292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.008299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11
11:14:55.008342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.008368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.008458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.008470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.008477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008493] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:40.660 [2024-07-11 11:14:55.008502] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:40.660 [2024-07-11 11:14:55.008517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11 11:14:55.008543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.008562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.008637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.008650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.008657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11 11:14:55.008705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.008725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.008822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.008838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.008845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.008867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.008883] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11 11:14:55.008893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.008914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.008990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.009004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.009010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.009017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.660 [2024-07-11 11:14:55.009033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.009042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.660 [2024-07-11 11:14:55.009048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.660 [2024-07-11 11:14:55.009058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.660 [2024-07-11 11:14:55.009078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.660 [2024-07-11 11:14:55.009153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.660 [2024-07-11 11:14:55.009165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.660 [2024-07-11 11:14:55.009172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.009194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.009220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.009239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.009330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.009343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.009349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.009371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.009397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.009417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.009490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.009502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.009513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.009535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.009561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.009581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.009654] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.009667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.009673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.009696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.009723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.009743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.009821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.009834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.009842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.009864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.009880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.009890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.009912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 
[2024-07-11 11:14:55.009984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.009998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.010005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.010028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.010053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.010073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.010148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.010161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.010168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.010195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.010221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.010241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.010319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.010332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.010339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.010361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.010387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.010407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.010497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.010509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:28:40.661 [2024-07-11 11:14:55.010516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.010538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.010563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.010583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.010672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.010684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.010691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.010712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.010727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.661 [2024-07-11 11:14:55.010738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.661 [2024-07-11 11:14:55.014761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.661 [2024-07-11 11:14:55.014784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.661 [2024-07-11 11:14:55.014796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.661 [2024-07-11 11:14:55.014803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.014810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.661 [2024-07-11 11:14:55.014832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.661 [2024-07-11 11:14:55.014843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.662 [2024-07-11 11:14:55.014850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2313ae0) 00:28:40.662 [2024-07-11 11:14:55.014861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.662 [2024-07-11 11:14:55.014883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x236a6c0, cid 3, qid 0 00:28:40.662 [2024-07-11 11:14:55.014977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.662 [2024-07-11 11:14:55.014992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.662 [2024-07-11 11:14:55.015010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.662 [2024-07-11 11:14:55.015018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x236a6c0) on tqpair=0x2313ae0 00:28:40.662 [2024-07-11 11:14:55.015032] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:40.662 00:28:40.662 11:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:40.662 [2024-07-11 11:14:55.048531] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:28:40.662 [2024-07-11 11:14:55.048571] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343533 ] 00:28:40.662 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.924 [2024-07-11 11:14:55.081653] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:40.924 [2024-07-11 11:14:55.081702] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:40.924 [2024-07-11 11:14:55.081711] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:40.924 [2024-07-11 11:14:55.081725] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:40.924 [2024-07-11 11:14:55.081749] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:40.924 [2024-07-11 11:14:55.081926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:40.924 [2024-07-11 11:14:55.081965] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfe3ae0 0 00:28:40.924 [2024-07-11 11:14:55.092771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:40.924 [2024-07-11 11:14:55.092790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:40.924 [2024-07-11 11:14:55.092797] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:40.924 [2024-07-11 11:14:55.092804] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:40.924 [2024-07-11 11:14:55.092841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.092852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.092859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.092872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:40.924 [2024-07-11 11:14:55.092897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.100771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.100793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.100801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.100808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.100825] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:40.924 [2024-07-11 11:14:55.100835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:40.924 [2024-07-11 11:14:55.100844] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:40.924 [2024-07-11 11:14:55.100861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.100869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.100876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.100887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.924 [2024-07-11 11:14:55.100909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.101065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.101080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.101086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.101101] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:40.924 [2024-07-11 11:14:55.101114] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:40.924 [2024-07-11 11:14:55.101126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.101151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.924 [2024-07-11 11:14:55.101172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.101253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.101267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.101273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.101288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:40.924 [2024-07-11 11:14:55.101302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.101314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101327] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.101338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.924 [2024-07-11 11:14:55.101358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.101455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.101469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.101479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.101495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.101511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.101537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.924 [2024-07-11 11:14:55.101558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.101658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.101671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.101678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.101692] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:40.924 [2024-07-11 11:14:55.101701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.101714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.101824] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:40.924 [2024-07-11 11:14:55.101832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.101844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.101858] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.101868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:40.924 [2024-07-11 11:14:55.101904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.102084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.102099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.102105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.102112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.102120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:40.924 [2024-07-11 11:14:55.102137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.102146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.102152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.102163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.924 [2024-07-11 11:14:55.102183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.924 [2024-07-11 11:14:55.102264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.924 [2024-07-11 11:14:55.102281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.924 [2024-07-11 11:14:55.102289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.102295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.924 [2024-07-11 11:14:55.102303] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:40.924 [2024-07-11 11:14:55.102311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:40.924 [2024-07-11 11:14:55.102324] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:40.924 [2024-07-11 11:14:55.102338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:40.924 [2024-07-11 11:14:55.102351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.924 [2024-07-11 11:14:55.102358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.924 [2024-07-11 11:14:55.102369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.925 [2024-07-11 11:14:55.102390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.925 [2024-07-11 11:14:55.102526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.925 [2024-07-11 11:14:55.102541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.925 [2024-07-11 11:14:55.102548] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 
11:14:55.102554] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=4096, cccid=0 00:28:40.925 [2024-07-11 11:14:55.102562] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a240) on tqpair(0xfe3ae0): expected_datao=0, payload_size=4096 00:28:40.925 [2024-07-11 11:14:55.102569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102579] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102587] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.102608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.925 [2024-07-11 11:14:55.102614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.925 [2024-07-11 11:14:55.102632] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:40.925 [2024-07-11 11:14:55.102644] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:40.925 [2024-07-11 11:14:55.102653] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:40.925 [2024-07-11 11:14:55.102659] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:40.925 [2024-07-11 11:14:55.102667] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:40.925 [2024-07-11 11:14:55.102675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.102688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.102700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.102728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.925 [2024-07-11 11:14:55.102750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.925 [2024-07-11 11:14:55.102892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.102904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.925 [2024-07-11 11:14:55.102911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0 00:28:40.925 [2024-07-11 11:14:55.102927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 
11:14:55.102941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.102951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.925 [2024-07-11 11:14:55.102960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.102982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.925 [2024-07-11 11:14:55.102991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.102998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.103013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.925 [2024-07-11 11:14:55.103022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.103058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.925 [2024-07-11 11:14:55.103067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.103114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.925 [2024-07-11 11:14:55.103150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a240, cid 0, qid 0 00:28:40.925 [2024-07-11 11:14:55.103161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a3c0, cid 1, qid 0 00:28:40.925 [2024-07-11 11:14:55.103168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a540, cid 2, qid 0 00:28:40.925 [2024-07-11 11:14:55.103175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.925 [2024-07-11 11:14:55.103182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.925 [2024-07-11 11:14:55.103409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.103424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
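[Editor's note] The GET FEATURES KEEP ALIVE TIMER command above (FID 0x0f, cid 4) is the host reading back the keep-alive timeout it requested at connect time; the entries that follow show the driver arming its timer from the granted value ("Sending keep alive every 5000000 us"). A minimal, hedged sketch of how an SPDK host application sets this up, using only public nvme.h calls; the transport string and the 10000 ms request below are illustrative assumptions, not values taken from this run:

    /* keep_alive_sketch.c - editorial illustration, not part of the captured test */
    #include <stddef.h>
    #include "spdk/nvme.h"

    struct spdk_nvme_ctrlr *
    connect_with_keep_alive(const char *trid_str)
    {
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr_opts opts;

            /* e.g. "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
             * subnqn:nqn.2016-06.io.spdk:cnode1" (assumed example string) */
            if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
                    return NULL;
            }
            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            /* Host-requested KATO; the target may clamp it, and the driver
             * then sends keep-alives at a fraction of the granted value. */
            opts.keep_alive_timeout_ms = 10000;
            return spdk_nvme_connect(&trid, &opts, sizeof(opts));
    }

Once connected this way, the driver emits KEEP ALIVE (18h) commands on its own, which is why they appear later in this trace without any corresponding test-script action.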
00:28:40.925 [2024-07-11 11:14:55.103434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.925 [2024-07-11 11:14:55.103450] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:40.925 [2024-07-11 11:14:55.103458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.103538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:40.925 [2024-07-11 11:14:55.103558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.925 [2024-07-11 11:14:55.103702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.103716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.925 [2024-07-11 11:14:55.103723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.925 [2024-07-11 11:14:55.103812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.103847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.103854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.103865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.925 [2024-07-11 11:14:55.103886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.925 [2024-07-11 11:14:55.104022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.925 [2024-07-11 11:14:55.104036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.925 [2024-07-11 11:14:55.104043] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104049] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=4096, cccid=4 00:28:40.925 [2024-07-11 
11:14:55.104057] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a840) on tqpair(0xfe3ae0): expected_datao=0, payload_size=4096 00:28:40.925 [2024-07-11 11:14:55.104064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.104178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.925 [2024-07-11 11:14:55.104185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.925 [2024-07-11 11:14:55.104219] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:40.925 [2024-07-11 11:14:55.104236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.104253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:40.925 [2024-07-11 11:14:55.104266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.925 [2024-07-11 11:14:55.104284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.925 [2024-07-11 11:14:55.104305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.925 [2024-07-11 11:14:55.104424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.925 [2024-07-11 11:14:55.104438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.925 [2024-07-11 11:14:55.104445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104451] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=4096, cccid=4 00:28:40.925 [2024-07-11 11:14:55.104459] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a840) on tqpair(0xfe3ae0): expected_datao=0, payload_size=4096 00:28:40.925 [2024-07-11 11:14:55.104466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104483] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104492] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.925 [2024-07-11 11:14:55.104515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.925 [2024-07-11 11:14:55.104526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.925 [2024-07-11 11:14:55.104533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.104540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.104560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify namespace id descriptors (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.104579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.104592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.104599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.104610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.104630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.926 [2024-07-11 11:14:55.104726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.926 [2024-07-11 11:14:55.104740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.926 [2024-07-11 11:14:55.104747] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108763] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=4096, cccid=4 00:28:40.926 [2024-07-11 11:14:55.108776] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a840) on tqpair(0xfe3ae0): expected_datao=0, payload_size=4096 00:28:40.926 [2024-07-11 11:14:55.108783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108801] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108810] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.108835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.108842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.108862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108930] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:40.926 [2024-07-11 
11:14:55.108938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:40.926 [2024-07-11 11:14:55.108946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:40.926 [2024-07-11 11:14:55.108965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.108973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.108997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.109009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.109031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.926 [2024-07-11 11:14:55.109071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.926 [2024-07-11 11:14:55.109083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a9c0, cid 5, qid 0 00:28:40.926 [2024-07-11 11:14:55.109250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.109265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.109271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.109288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.109297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.109304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a9c0) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.109326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.109345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.109369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a9c0, cid 5, qid 0 00:28:40.926 [2024-07-11 11:14:55.109501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.109513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.109520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a9c0) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.109542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.109561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.109580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a9c0, cid 5, qid 0 00:28:40.926 [2024-07-11 11:14:55.109702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.109714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.109720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a9c0) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.109742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.109769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.109790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a9c0, cid 5, qid 0 00:28:40.926 [2024-07-11 11:14:55.109920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.926 [2024-07-11 11:14:55.109934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.926 [2024-07-11 11:14:55.109940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a9c0) on tqpair=0xfe3ae0 00:28:40.926 [2024-07-11 11:14:55.109970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.109981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.109991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.110003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.110019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.110030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.110047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.110064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=7 on tqpair(0xfe3ae0) 00:28:40.926 [2024-07-11 11:14:55.110096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.926 [2024-07-11 11:14:55.110117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a9c0, cid 5, qid 0 00:28:40.926 [2024-07-11 11:14:55.110131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a840, cid 4, qid 0 00:28:40.926 [2024-07-11 11:14:55.110154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103ab40, cid 6, qid 0 00:28:40.926 [2024-07-11 11:14:55.110162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103acc0, cid 7, qid 0 00:28:40.926 [2024-07-11 11:14:55.110377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.926 [2024-07-11 11:14:55.110391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.926 [2024-07-11 11:14:55.110398] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110404] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=8192, cccid=5 00:28:40.926 [2024-07-11 11:14:55.110412] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a9c0) on tqpair(0xfe3ae0): expected_datao=0, payload_size=8192 00:28:40.926 [2024-07-11 11:14:55.110419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110454] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.926 [2024-07-11 11:14:55.110484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.926 [2024-07-11 11:14:55.110491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=512, cccid=4 00:28:40.926 [2024-07-11 11:14:55.110505] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103a840) on tqpair(0xfe3ae0): expected_datao=0, payload_size=512 00:28:40.926 [2024-07-11 11:14:55.110513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110529] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:40.926 [2024-07-11 11:14:55.110546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:40.926 [2024-07-11 11:14:55.110552] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110558] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=512, cccid=6 00:28:40.926 [2024-07-11 11:14:55.110566] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103ab40) on tqpair(0xfe3ae0): expected_datao=0, payload_size=512 00:28:40.926 [2024-07-11 11:14:55.110573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.926 [2024-07-11 11:14:55.110582] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter
00:28:40.926 [2024-07-11 11:14:55.110589] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.110597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:40.927 [2024-07-11 11:14:55.110606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:40.927 [2024-07-11 11:14:55.110613] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.110619] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe3ae0): datao=0, datal=4096, cccid=7
00:28:40.927 [2024-07-11 11:14:55.110626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103acc0) on tqpair(0xfe3ae0): expected_datao=0, payload_size=4096
00:28:40.927 [2024-07-11 11:14:55.110633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.110653] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.110663] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.150957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.927 [2024-07-11 11:14:55.150976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.927 [2024-07-11 11:14:55.150984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.150995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a9c0) on tqpair=0xfe3ae0
00:28:40.927 [2024-07-11 11:14:55.151014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.927 [2024-07-11 11:14:55.151025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.927 [2024-07-11 11:14:55.151032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.151039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a840) on tqpair=0xfe3ae0
00:28:40.927 [2024-07-11 11:14:55.151054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.927 [2024-07-11 11:14:55.151064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.927 [2024-07-11 11:14:55.151070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.151077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103ab40) on tqpair=0xfe3ae0
00:28:40.927 [2024-07-11 11:14:55.151088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.927 [2024-07-11 11:14:55.151097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.927 [2024-07-11 11:14:55.151103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.927 [2024-07-11 11:14:55.151110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103acc0) on tqpair=0xfe3ae0
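[Editor's note] At this point controller initialization is done and the test prints its identify report; everything from the "=====" banner down through the namespace listing appears to be the standard output format of SPDK's identify example application, which host/identify.sh runs against the target. In the transport entries above, "pdu type = 5" and "pdu type = 7" are the CapsuleResp and C2HData PDU types defined by the NVMe/TCP transport specification. A hedged sketch of how such a report is assembled from the public API (the calls and struct fields are real SPDK API; the selection and formatting are my own, not the example's exact code):

    /* identify_sketch.c - editorial illustration */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

            /* sn/mn/fr are fixed-width, space-padded byte fields, not C strings,
             * hence the precision specifiers. */
            printf("Vendor ID: %04x\n", cdata->vid);
            printf("Serial Number: %.20s\n", cdata->sn);
            printf("Model Number: %.40s\n", cdata->mn);
            printf("Firmware Version: %.8s\n", cdata->fr);
            /* MDTS is a log2 multiplier of the minimum page size; with a
             * 4096-byte minimum page, a raw value of 5 gives the
             * 131072-byte max transfer size reported in this run. */
            printf("MDTS (raw log2 units): %u\n", cdata->mdts);
    }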
=====================================================
00:28:40.927 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:40.927 =====================================================
00:28:40.927 Controller Capabilities/Features
00:28:40.927 ================================
00:28:40.927 Vendor ID: 8086
00:28:40.927 Subsystem Vendor ID: 8086
00:28:40.927 Serial Number: SPDK00000000000001
00:28:40.927 Model Number: SPDK bdev Controller
00:28:40.927 Firmware Version: 24.09
00:28:40.927 Recommended Arb Burst: 6
00:28:40.927 IEEE OUI Identifier: e4 d2 5c
00:28:40.927 Multi-path I/O
00:28:40.927 May have multiple subsystem ports: Yes
00:28:40.927 May have multiple controllers: Yes
00:28:40.927 Associated with SR-IOV VF: No
00:28:40.927 Max Data Transfer Size: 131072
00:28:40.927 Max Number of Namespaces: 32
00:28:40.927 Max Number of I/O Queues: 127
00:28:40.927 NVMe Specification Version (VS): 1.3
00:28:40.927 NVMe Specification Version (Identify): 1.3
00:28:40.927 Maximum Queue Entries: 128
00:28:40.927 Contiguous Queues Required: Yes
00:28:40.927 Arbitration Mechanisms Supported
00:28:40.927 Weighted Round Robin: Not Supported
00:28:40.927 Vendor Specific: Not Supported
00:28:40.927 Reset Timeout: 15000 ms
00:28:40.927 Doorbell Stride: 4 bytes
00:28:40.927 NVM Subsystem Reset: Not Supported
00:28:40.927 Command Sets Supported
00:28:40.927 NVM Command Set: Supported
00:28:40.927 Boot Partition: Not Supported
00:28:40.927 Memory Page Size Minimum: 4096 bytes
00:28:40.927 Memory Page Size Maximum: 4096 bytes
00:28:40.927 Persistent Memory Region: Not Supported
00:28:40.927 Optional Asynchronous Events Supported
00:28:40.927 Namespace Attribute Notices: Supported
00:28:40.927 Firmware Activation Notices: Not Supported
00:28:40.927 ANA Change Notices: Not Supported
00:28:40.927 PLE Aggregate Log Change Notices: Not Supported
00:28:40.927 LBA Status Info Alert Notices: Not Supported
00:28:40.927 EGE Aggregate Log Change Notices: Not Supported
00:28:40.927 Normal NVM Subsystem Shutdown event: Not Supported
00:28:40.927 Zone Descriptor Change Notices: Not Supported
00:28:40.927 Discovery Log Change Notices: Not Supported
00:28:40.927 Controller Attributes
00:28:40.927 128-bit Host Identifier: Supported
00:28:40.927 Non-Operational Permissive Mode: Not Supported
00:28:40.927 NVM Sets: Not Supported
00:28:40.927 Read Recovery Levels: Not Supported
00:28:40.927 Endurance Groups: Not Supported
00:28:40.927 Predictable Latency Mode: Not Supported
00:28:40.927 Traffic Based Keep ALive: Not Supported
00:28:40.927 Namespace Granularity: Not Supported
00:28:40.927 SQ Associations: Not Supported
00:28:40.927 UUID List: Not Supported
00:28:40.927 Multi-Domain Subsystem: Not Supported
00:28:40.927 Fixed Capacity Management: Not Supported
00:28:40.927 Variable Capacity Management: Not Supported
00:28:40.927 Delete Endurance Group: Not Supported
00:28:40.927 Delete NVM Set: Not Supported
00:28:40.927 Extended LBA Formats Supported: Not Supported
00:28:40.927 Flexible Data Placement Supported: Not Supported
00:28:40.927 
00:28:40.927 Controller Memory Buffer Support
00:28:40.927 ================================
00:28:40.927 Supported: No
00:28:40.927 
00:28:40.927 Persistent Memory Region Support
00:28:40.927 ================================
00:28:40.927 Supported: No
00:28:40.927 
00:28:40.927 Admin Command Set Attributes
00:28:40.927 ============================
00:28:40.927 Security Send/Receive: Not Supported
00:28:40.927 Format NVM: Not Supported
00:28:40.927 Firmware Activate/Download: Not Supported
00:28:40.927 Namespace Management: Not Supported
00:28:40.927 Device Self-Test: Not Supported
00:28:40.927 Directives: Not Supported
00:28:40.927 NVMe-MI: Not Supported
00:28:40.927 Virtualization Management: Not Supported
00:28:40.927 Doorbell Buffer Config: Not Supported
00:28:40.927 Get LBA Status Capability: Not Supported
00:28:40.927 Command & Feature Lockdown Capability: Not Supported
00:28:40.927 Abort Command Limit: 4
00:28:40.927 Async Event Request Limit: 4
00:28:40.927 Number of Firmware Slots: N/A
00:28:40.927 Firmware Slot 1 Read-Only: N/A
00:28:40.927 Firmware Activation Without Reset: N/A
00:28:40.927 Multiple Update Detection Support: N/A
00:28:40.927 Firmware Update Granularity: No Information Provided
00:28:40.927 Per-Namespace SMART Log: No
00:28:40.927 Asymmetric Namespace Access Log Page: Not Supported
00:28:40.927 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:40.927 Command Effects Log Page: Supported
00:28:40.927 Get Log Page Extended Data: Supported
00:28:40.927 Telemetry Log Pages: Not Supported
00:28:40.927 Persistent Event Log Pages: Not Supported
00:28:40.927 Supported Log Pages Log Page: May Support
00:28:40.927 Commands Supported & Effects Log Page: Not Supported
00:28:40.927 Feature Identifiers & Effects Log Page:May Support
00:28:40.927 NVMe-MI Commands & Effects Log Page: May Support
00:28:40.927 Data Area 4 for Telemetry Log: Not Supported
00:28:40.927 Error Log Page Entries Supported: 128
00:28:40.927 Keep Alive: Supported
00:28:40.927 Keep Alive Granularity: 10000 ms
00:28:40.927 
00:28:40.927 NVM Command Set Attributes
00:28:40.927 ==========================
00:28:40.927 Submission Queue Entry Size
00:28:40.927 Max: 64
00:28:40.927 Min: 64
00:28:40.927 Completion Queue Entry Size
00:28:40.927 Max: 16
00:28:40.927 Min: 16
00:28:40.927 Number of Namespaces: 32
00:28:40.927 Compare Command: Supported
00:28:40.927 Write Uncorrectable Command: Not Supported
00:28:40.927 Dataset Management Command: Supported
00:28:40.927 Write Zeroes Command: Supported
00:28:40.927 Set Features Save Field: Not Supported
00:28:40.927 Reservations: Supported
00:28:40.927 Timestamp: Not Supported
00:28:40.927 Copy: Supported
00:28:40.927 Volatile Write Cache: Present
00:28:40.927 Atomic Write Unit (Normal): 1
00:28:40.927 Atomic Write Unit (PFail): 1
00:28:40.927 Atomic Compare & Write Unit: 1
00:28:40.927 Fused Compare & Write: Supported
00:28:40.927 Scatter-Gather List
00:28:40.927 SGL Command Set: Supported
00:28:40.927 SGL Keyed: Supported
00:28:40.927 SGL Bit Bucket Descriptor: Not Supported
00:28:40.927 SGL Metadata Pointer: Not Supported
00:28:40.927 Oversized SGL: Not Supported
00:28:40.927 SGL Metadata Address: Not Supported
00:28:40.927 SGL Offset: Supported
00:28:40.927 Transport SGL Data Block: Not Supported
00:28:40.927 Replay Protected Memory Block: Not Supported
00:28:40.927 
00:28:40.927 Firmware Slot Information
00:28:40.927 =========================
00:28:40.927 Active slot: 1
00:28:40.927 Slot 1 Firmware Revision: 24.09
00:28:40.927 
00:28:40.927 
00:28:40.927 Commands Supported and Effects
00:28:40.927 ==============================
00:28:40.927 Admin Commands
00:28:40.927 --------------
00:28:40.927 Get Log Page (02h): Supported
00:28:40.927 Identify (06h): Supported
00:28:40.927 Abort (08h): Supported
00:28:40.927 Set Features (09h): Supported
00:28:40.927 Get Features (0Ah): Supported
00:28:40.927 Asynchronous Event Request (0Ch): Supported
00:28:40.927 Keep Alive (18h): Supported
00:28:40.927 I/O Commands
00:28:40.927 ------------
00:28:40.927 Flush (00h): Supported LBA-Change
00:28:40.927 Write (01h): Supported LBA-Change
00:28:40.927 Read (02h): Supported
00:28:40.927 Compare (05h): Supported
00:28:40.927 Write Zeroes (08h): Supported LBA-Change
00:28:40.927 Dataset Management (09h): Supported LBA-Change
00:28:40.927 Copy (19h): Supported LBA-Change
00:28:40.928 
00:28:40.928 Error Log
00:28:40.928 =========
00:28:40.928 
00:28:40.928 Arbitration
00:28:40.928 ===========
00:28:40.928 Arbitration Burst: 1
00:28:40.928 
00:28:40.928 Power Management
00:28:40.928 ================
00:28:40.928 Number of Power States: 1
00:28:40.928 Current Power State: Power State #0
00:28:40.928 Power State #0:
00:28:40.928 Max Power: 0.00 W
00:28:40.928 Non-Operational State: Operational
00:28:40.928 Entry Latency: Not Reported
00:28:40.928 Exit Latency: Not Reported
00:28:40.928 Relative Read Throughput: 0
00:28:40.928 Relative Read Latency: 0
00:28:40.928 Relative Write Throughput: 0
00:28:40.928 Relative Write Latency: 0
00:28:40.928 Idle Power: Not Reported
00:28:40.928 Active Power: Not Reported
00:28:40.928 Non-Operational Permissive Mode: Not Supported
00:28:40.928 
00:28:40.928 Health Information
00:28:40.928 ==================
00:28:40.928 Critical Warnings:
00:28:40.928 Available Spare Space: OK
00:28:40.928 Temperature: OK
00:28:40.928 Device Reliability: OK
00:28:40.928 Read Only: No
00:28:40.928 Volatile Memory Backup: OK
00:28:40.928 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:40.928 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:40.928 Available Spare: 0%
00:28:40.928 Available Spare Threshold: 0%
00:28:40.928 Life Percentage Used:[2024-07-11 11:14:55.151225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.928 [2024-07-11 11:14:55.151252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfe3ae0)
00:28:40.928 [2024-07-11 11:14:55.151263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.928 [2024-07-11 11:14:55.151285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103acc0, cid 7, qid 0
00:28:40.928 [2024-07-11 11:14:55.151433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.928 [2024-07-11 11:14:55.151446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.928 [2024-07-11 11:14:55.151453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.928 [2024-07-11 11:14:55.151460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103acc0) on tqpair=0xfe3ae0
00:28:40.928 [2024-07-11 11:14:55.151507] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:40.928 [2024-07-11 11:14:55.151526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a240) on tqpair=0xfe3ae0
00:28:40.928 [2024-07-11 11:14:55.151536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.928 [2024-07-11 11:14:55.151545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a3c0) on tqpair=0xfe3ae0
00:28:40.928 [2024-07-11 11:14:55.151553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.928 [2024-07-11 11:14:55.151561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a540) on tqpair=0xfe3ae0
00:28:40.928 [2024-07-11 11:14:55.151569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.928 [2024-07-11 11:14:55.151577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.928 [2024-07-11 11:14:55.151585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:40.928 [2024-07-11 11:14:55.151597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.928 [2024-07-11
11:14:55.151620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.151627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.151638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.151659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.155765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.155782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.155789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.155796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.155807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.155814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.155821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.155831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.155858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.156012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.156027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.156033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.156048] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:40.928 [2024-07-11 11:14:55.156055] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:40.928 [2024-07-11 11:14:55.156071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.156097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.156118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.156213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.156226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.156233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.156255] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.156281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.156301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.156416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.156428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.156434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.156457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.156482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.156506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.156587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.156601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.156607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.156630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.928 [2024-07-11 11:14:55.156656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.928 [2024-07-11 11:14:55.156676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.928 [2024-07-11 11:14:55.156773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.928 [2024-07-11 11:14:55.156787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.928 [2024-07-11 11:14:55.156794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.928 [2024-07-11 11:14:55.156801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.928 [2024-07-11 11:14:55.156816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.156825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.156832] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.156842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.156862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.156993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.157006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.157013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.157035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.157061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.157080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.157196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.157208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.157214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.157236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.157262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.157285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.157365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.157378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.157385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.157407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.157433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.157452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.157566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.157579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.157586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.157608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.157634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.157653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.157772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.157786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.157792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.157815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.157830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.157840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.157860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 11:14:55.157991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:40.929 [2024-07-11 11:14:55.158003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:40.929 [2024-07-11 11:14:55.158010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.158017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0 00:28:40.929 [2024-07-11 11:14:55.158032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.158041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:40.929 [2024-07-11 11:14:55.158048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0) 00:28:40.929 [2024-07-11 11:14:55.158058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-11 11:14:55.158078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0 00:28:40.929 [2024-07-11 
11:14:55.158162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.158176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.158182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.158205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.158230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.158250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.158345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.158356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.158363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.158385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.158410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.158430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.158559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.158571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.158578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.158600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.158625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.158645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.158776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.158792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.158798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.158821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.158847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.158868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.158948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.158965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.158972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.158979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.158995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.159021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.159041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.159122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.159134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.159140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.929 [2024-07-11 11:14:55.159162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.929 [2024-07-11 11:14:55.159188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.929 [2024-07-11 11:14:55.159207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.929 [2024-07-11 11:14:55.159301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.929 [2024-07-11 11:14:55.159315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.929 [2024-07-11 11:14:55.159321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.929 [2024-07-11 11:14:55.159328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.930 [2024-07-11 11:14:55.159344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.930 [2024-07-11 11:14:55.159369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.930 [2024-07-11 11:14:55.159388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.930 [2024-07-11 11:14:55.159485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.930 [2024-07-11 11:14:55.159497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.930 [2024-07-11 11:14:55.159503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.930 [2024-07-11 11:14:55.159526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.930 [2024-07-11 11:14:55.159551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.930 [2024-07-11 11:14:55.159571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.930 [2024-07-11 11:14:55.159653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.930 [2024-07-11 11:14:55.159666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.930 [2024-07-11 11:14:55.159676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.930 [2024-07-11 11:14:55.159699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.159714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.930 [2024-07-11 11:14:55.159725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.930 [2024-07-11 11:14:55.159744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.930 [2024-07-11 11:14:55.163784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.930 [2024-07-11 11:14:55.163798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.930 [2024-07-11 11:14:55.163805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.163811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.930 [2024-07-11 11:14:55.163829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.163838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.163844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe3ae0)
00:28:40.930 [2024-07-11 11:14:55.163854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.930 [2024-07-11 11:14:55.163874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103a6c0, cid 3, qid 0
00:28:40.930 [2024-07-11 11:14:55.163998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:40.930 [2024-07-11 11:14:55.164012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:40.930 [2024-07-11 11:14:55.164019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:40.930 [2024-07-11 11:14:55.164025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x103a6c0) on tqpair=0xfe3ae0
00:28:40.930 [2024-07-11 11:14:55.164038] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:28:40.930 0%
00:28:40.930 Data Units Read: 0
00:28:40.930 Data Units Written: 0
00:28:40.930 Host Read Commands: 0
00:28:40.930 Host Write Commands: 0
00:28:40.930 Controller Busy Time: 0 minutes
00:28:40.930 Power Cycles: 0
00:28:40.930 Power On Hours: 0 hours
00:28:40.930 Unsafe Shutdowns: 0
00:28:40.930 Unrecoverable Media Errors: 0
00:28:40.930 Lifetime Error Log Entries: 0
00:28:40.930 Warning Temperature Time: 0 minutes
00:28:40.930 Critical Temperature Time: 0 minutes
00:28:40.930
00:28:40.930 Number of Queues
00:28:40.930 ================
00:28:40.930 Number of I/O Submission Queues: 127
00:28:40.930 Number of I/O Completion Queues: 127
00:28:40.930
00:28:40.930 Active Namespaces
00:28:40.930 =================
00:28:40.930 Namespace ID:1
00:28:40.930 Error Recovery Timeout: Unlimited
00:28:40.930 Command Set Identifier: NVM (00h)
00:28:40.930 Deallocate: Supported
00:28:40.930 Deallocated/Unwritten Error: Not Supported
00:28:40.930 Deallocated Read Value: Unknown
00:28:40.930 Deallocate in Write Zeroes: Not Supported
00:28:40.930 Deallocated Guard Field: 0xFFFF
00:28:40.930 Flush: Supported
00:28:40.930 Reservation: Supported
00:28:40.930 Namespace Sharing Capabilities: Multiple Controllers
00:28:40.930 Size (in LBAs): 131072 (0GiB)
00:28:40.930 Capacity (in LBAs): 131072 (0GiB)
00:28:40.930 Utilization (in LBAs): 131072 (0GiB)
00:28:40.930 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:40.930 EUI64: ABCDEF0123456789
00:28:40.930 UUID: 5d5a2e83-0835-41f5-963c-a50dcee595de
00:28:40.930 Thin Provisioning: Not Supported
00:28:40.930 Per-NS Atomic Units: Yes
00:28:40.930 Atomic Boundary Size (Normal): 0
00:28:40.930 Atomic Boundary Size (PFail): 0
00:28:40.930 Atomic Boundary Offset: 0
00:28:40.930 Maximum Single Source Range Length: 65535
00:28:40.930 Maximum Copy Length: 65535
00:28:40.930 Maximum Source Range Count: 1
00:28:40.930 NGUID/EUI64 Never Reused: No
00:28:40.930 Namespace Write Protected: No
00:28:40.930 Number of LBA Formats: 1
00:28:40.930 Current LBA Format: LBA Format #00
00:28:40.930 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:40.930
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 343482 ']'
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 343482
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 343482 ']'
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 343482
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 343482
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 343482'
killing process with pid 343482
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 343482
00:28:40.930 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 343482
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:41.189 11:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:43.722 11:14:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:43.722
00:28:43.722 real 0m5.302s
00:28:43.722 user 0m4.372s
00:28:43.722 sys 0m1.808s
00:28:43.722 11:14:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
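The nvmftestfini/nvmfcleanup trace above reduces to a short, reusable teardown. A minimal sketch, assuming the nvme-tcp stack was loaded by the harness and that $nvmfpid holds the target PID (both true in this run; the retry loop mirrors nvmf/common.sh, and the sleep between retries is my assumption, not traced output):

    sync                                 # flush dirty data before unloading
    for i in {1..20}; do
        # removing nvme-tcp also drops its dependents (nvme_fabrics, nvme_keyring)
        modprobe -v -r nvme-tcp && break
        sleep 1                          # assumed back-off between retries
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # reactor_0, pid 343482 in this run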
00:28:43.722 11:14:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:43.722 ************************************
00:28:43.722 END TEST nvmf_identify
00:28:43.722 ************************************
00:28:43.722 11:14:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:43.722 11:14:57 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:43.722 11:14:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:43.722 11:14:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:43.722 11:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:43.722 ************************************
00:28:43.722 START TEST nvmf_perf
00:28:43.722 ************************************
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:43.722 * Looking for test storage...
00:28:43.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:28:43.722 11:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=()
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:28:45.099 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:28:45.099 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:28:45.099 Found net devices under 0000:0a:00.0: cvl_0_0
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:28:45.099 Found net devices under 0000:0a:00.1: cvl_0_1
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:45.099 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:45.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
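For reference, the nvmf_tcp_init block traced above is the whole physical-NIC loopback recipe: one port of the two-port NIC is moved into a private network namespace to act as the target, while the peer port stays in the root namespace as the initiator. A condensed sketch using this run's names (cvl_0_0/cvl_0_1 and 10.0.0.0/24 are harness conventions, not requirements):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP back in
    ping -c 1 10.0.0.2                                   # sanity check, as the harness does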
00:28:45.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms
00:28:45.359
00:28:45.359 --- 10.0.0.2 ping statistics ---
00:28:45.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:45.359 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:45.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:45.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms
00:28:45.359
00:28:45.359 --- 10.0.0.1 ping statistics ---
00:28:45.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:45.359 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=345475
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 345475
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 345475 ']'
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:45.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:45.359 11:14:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:45.360 [2024-07-11 11:14:59.718554] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:28:45.360 [2024-07-11 11:14:59.718642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:45.360 EAL: No free 2048 kB hugepages reported on node 1
00:28:45.618 [2024-07-11 11:14:59.793021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:45.618 [2024-07-11 11:14:59.883810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:45.618 [2024-07-11 11:14:59.883861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:45.618 [2024-07-11 11:14:59.883891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:45.618 [2024-07-11 11:14:59.883903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:45.618 [2024-07-11 11:14:59.883915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:45.618 [2024-07-11 11:14:59.883980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:45.618 [2024-07-11 11:14:59.884036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:45.618 [2024-07-11 11:14:59.884087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:45.618 [2024-07-11 11:14:59.884085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:28:45.618 11:15:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:28:48.901 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:28:48.901 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:28:49.158 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0
00:28:49.158 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:28:49.415 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:28:49.415 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']'
00:28:49.415 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:28:49.415 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:28:49.415 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:28:49.672 [2024-07-11 11:15:03.895056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:49.672 11:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:49.929 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:28:49.929 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:50.187 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:28:50.187 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:28:50.445 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:50.703 [2024-07-11 11:15:04.914730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:50.703 11:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:50.961 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']'
00:28:50.961 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:28:50.961 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:28:50.961 11:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:28:52.339 Initializing NVMe Controllers
00:28:52.339 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:28:52.339 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:28:52.339 Initialization complete. Launching workers.
00:28:52.339 ========================================================
00:28:52.339 Latency(us)
00:28:52.339 Device Information : IOPS MiB/s Average min max
00:28:52.339 PCIE (0000:88:00.0) NSID 1 from core 0: 85333.55 333.33 374.29 11.86 8286.87
00:28:52.339 ========================================================
00:28:52.339 Total : 85333.55 333.33 374.29 11.86 8286.87
00:28:52.339
00:28:52.339 11:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:52.339 EAL: No free 2048 kB hugepages reported on node 1
00:28:53.712 Initializing NVMe Controllers
00:28:53.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:53.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:53.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:53.712 Initialization complete. Launching workers.
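The RPC calls above are the complete bring-up for the perf target, scattered through the trace. Condensed into one place as a sketch (rpc.py path shortened from the jenkins workspace prefix; it talks to /var/tmp/spdk.sock by default, inside the target's process regardless of namespace):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two add_ns calls are why every fabrics result below reports an NSID 1 (Malloc0) and an NSID 2 (Nvme0n1) row.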
00:28:53.712 ========================================================
00:28:53.712 Latency(us)
00:28:53.712 Device Information : IOPS MiB/s Average min max
00:28:53.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.73 0.42 9311.12 139.03 45450.13
00:28:53.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.86 0.21 18989.33 6939.66 47921.70
00:28:53.712 ========================================================
00:28:53.712 Total : 161.59 0.63 12537.19 139.03 47921.70
00:28:53.712
00:28:53.712 11:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:53.712 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.084 Initializing NVMe Controllers
00:28:55.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:55.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:55.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:55.084 Initialization complete. Launching workers.
00:28:55.084 ========================================================
00:28:55.084 Latency(us)
00:28:55.084 Device Information : IOPS MiB/s Average min max
00:28:55.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8542.91 33.37 3747.35 676.70 9232.80
00:28:55.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3824.32 14.94 8391.06 5980.60 16886.61
00:28:55.084 ========================================================
00:28:55.084 Total : 12367.23 48.31 5183.33 676.70 16886.61
00:28:55.084
00:28:55.084 11:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:28:55.084 11:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:28:55.084 11:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:55.084 EAL: No free 2048 kB hugepages reported on node 1
00:28:57.612 Initializing NVMe Controllers
00:28:57.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:57.612 Controller IO queue size 128, less than required.
00:28:57.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:57.612 Controller IO queue size 128, less than required.
00:28:57.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:57.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:57.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:57.612 Initialization complete. Launching workers.
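Each fabrics run here is a plain spdk_nvme_perf invocation against the listener; between runs only the queue depth (-q), I/O size (-o), and runtime (-t) change. A standalone sketch of the first one (build path shortened from the workspace prefix):

    ./build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

-M 50 makes the randrw mix 50% reads, and -r selects the NVMe-oF target triple instead of a local PCIe controller, which is the only difference from the PCIe baseline run above it.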
00:28:57.612 ========================================================
00:28:57.612 Latency(us)
00:28:57.612 Device Information : IOPS MiB/s Average min max
00:28:57.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1718.94 429.74 75406.49 48916.38 129227.40
00:28:57.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.95 147.49 230086.29 98222.78 332099.90
00:28:57.612 ========================================================
00:28:57.612 Total : 2308.89 577.22 114929.08 48916.38 332099.90
00:28:57.612
00:28:57.612 11:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:28:57.612 EAL: No free 2048 kB hugepages reported on node 1
00:28:57.872 No valid NVMe controllers or AIO or URING devices found
00:28:57.872 Initializing NVMe Controllers
00:28:57.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:57.872 Controller IO queue size 128, less than required.
00:28:57.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:57.872 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:28:57.872 Controller IO queue size 128, less than required.
00:28:57.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:57.872 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:28:57.872 WARNING: Some requested NVMe devices were skipped
00:28:57.872 11:15:12 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:28:57.872 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.405 Initializing NVMe Controllers
00:29:00.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:00.405 Controller IO queue size 128, less than required.
00:29:00.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:00.405 Controller IO queue size 128, less than required.
00:29:00.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:00.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:00.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:00.405 Initialization complete. Launching workers.
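The skipped run above is expected rather than a failure: with -o 36964 the I/O size is not aligned to the 512-byte sector size, both namespaces are removed from the test, and no controllers remain. The check behind the warning is plain modular arithmetic (a sketch using shell arithmetic):

    echo $(( 36964 % 512 ))   # 100, non-zero -> "Removing this ns from test"
    echo $(( 36864 % 512 ))   # 0; a 72*512 = 36864-byte I/O would have been accepted

(36964 = 72 * 512 + 100, hence the non-zero remainder.)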
00:29:00.405
00:29:00.405 ====================
00:29:00.405 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:29:00.405 TCP transport:
00:29:00.405 polls: 10151
00:29:00.405 idle_polls: 6282
00:29:00.405 sock_completions: 3869
00:29:00.405 nvme_completions: 6217
00:29:00.405 submitted_requests: 9300
00:29:00.405 queued_requests: 1
00:29:00.405
00:29:00.405 ====================
00:29:00.405 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:29:00.405 TCP transport:
00:29:00.405 polls: 12864
00:29:00.405 idle_polls: 9461
00:29:00.405 sock_completions: 3403
00:29:00.405 nvme_completions: 6193
00:29:00.405 submitted_requests: 9332
00:29:00.405 queued_requests: 1
00:29:00.405 ========================================================
00:29:00.405 Latency(us)
00:29:00.405 Device Information : IOPS MiB/s Average min max
00:29:00.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1552.65 388.16 85035.82 54363.51 153032.50
00:29:00.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1546.66 386.66 83271.51 41700.16 126613.84
00:29:00.405 ========================================================
00:29:00.405 Total : 3099.31 774.83 84155.37 41700.16 153032.50
00:29:00.405
00:29:00.405 11:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:29:00.405 11:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:00.662 11:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:29:00.662 11:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:29:00.662 11:15:14 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=aeb349ef-0fef-4970-a4fb-8a3bda7155c7
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb aeb349ef-0fef-4970-a4fb-8a3bda7155c7
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=aeb349ef-0fef-4970-a4fb-8a3bda7155c7
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:03.946 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:29:03.946 {
00:29:03.946 "uuid": "aeb349ef-0fef-4970-a4fb-8a3bda7155c7",
00:29:03.946 "name": "lvs_0",
00:29:03.946 "base_bdev": "Nvme0n1",
00:29:03.946 "total_data_clusters": 238234,
00:29:03.946 "free_clusters": 238234,
00:29:03.946 "block_size": 512,
00:29:03.946 "cluster_size": 4194304
00:29:03.946 }
00:29:03.946 ]'
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="aeb349ef-0fef-4970-a4fb-8a3bda7155c7") .free_clusters'
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="aeb349ef-0fef-4970-a4fb-8a3bda7155c7") .cluster_size'
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936
00:29:04.204 11:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936
00:29:04.204 952936
00:29:04.205 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:29:04.205 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480
00:29:04.205 11:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aeb349ef-0fef-4970-a4fb-8a3bda7155c7 lbd_0 20480
00:29:04.770 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8714ce9e-ef4b-4538-b27a-851efc442f24
00:29:04.770 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8714ce9e-ef4b-4538-b27a-851efc442f24 lvs_n_0
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=27fbc1b2-fc2f-454a-9063-296ff21c5b74
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 27fbc1b2-fc2f-454a-9063-296ff21c5b74
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=27fbc1b2-fc2f-454a-9063-296ff21c5b74
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:29:05.707 11:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:29:05.707 {
00:29:05.707 "uuid": "aeb349ef-0fef-4970-a4fb-8a3bda7155c7",
00:29:05.707 "name": "lvs_0",
00:29:05.707 "base_bdev": "Nvme0n1",
00:29:05.707 "total_data_clusters": 238234,
00:29:05.707 "free_clusters": 233114,
00:29:05.707 "block_size": 512,
00:29:05.707 "cluster_size": 4194304
00:29:05.707 },
00:29:05.707 {
00:29:05.707 "uuid": "27fbc1b2-fc2f-454a-9063-296ff21c5b74",
00:29:05.707 "name": "lvs_n_0",
00:29:05.707 "base_bdev": "8714ce9e-ef4b-4538-b27a-851efc442f24",
00:29:05.707 "total_data_clusters": 5114,
00:29:05.707 "free_clusters": 5114,
00:29:05.707 "block_size": 512,
00:29:05.707 "cluster_size": 4194304
00:29:05.707 }
00:29:05.707 ]'
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="27fbc1b2-fc2f-454a-9063-296ff21c5b74") .free_clusters'
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="27fbc1b2-fc2f-454a-9063-296ff21c5b74") .cluster_size'
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456
00:29:05.707 20456
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:29:05.707 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27fbc1b2-fc2f-454a-9063-296ff21c5b74 lbd_nest_0 20456
00:29:05.985 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=eaa7bac9-8eb7-4992-95ec-99631294fd1e
00:29:06.279 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:06.279 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:29:06.279 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 eaa7bac9-8eb7-4992-95ec-99631294fd1e
00:29:06.606 11:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:06.894 11:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:29:06.894 11:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:29:06.894 11:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:29:06.894 11:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:06.894 11:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:06.894 EAL: No free 2048 kB hugepages reported on node 1
00:29:19.111 Initializing NVMe Controllers
00:29:19.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:19.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:19.111 Initialization complete. Launching workers.
00:29:19.111 ========================================================
00:29:19.111 Latency(us)
00:29:19.111 Device Information : IOPS MiB/s Average min max
00:29:19.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.00 0.02 21306.89 165.90 46055.00
00:29:19.111 ========================================================
00:29:19.111 Total : 47.00 0.02 21306.89 165.90 46055.00
00:29:19.111
00:29:19.111 11:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:19.111 11:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:19.111 EAL: No free 2048 kB hugepages reported on node 1
00:29:29.089 Initializing NVMe Controllers
00:29:29.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:29.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:29.089 Initialization complete. Launching workers.
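The get_lvs_free_mb arithmetic traced above is simply free_clusters times cluster_size, expressed in MiB: 238234 * 4 MiB = 952936 MB for lvs_0 (then capped to 20480 for lbd_0), and 5114 * 4 MiB = 20456 MB for the nested lvs_n_0, which is why lbd_nest_0 is created at 20456 rather than 20480. A minimal sketch of the same computation (lvstore name and shortened rpc.py path are assumptions for illustration):

    lvs=lvs_n_0
    fc=$(scripts/rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.name==\"$lvs\") .free_clusters")
    cs=$(scripts/rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.name==\"$lvs\") .cluster_size")
    echo $(( fc * cs / 1048576 ))   # 5114 * 4194304 / 2^20 = 20456 MB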
00:29:29.089 ========================================================
00:29:29.089 Latency(us)
00:29:29.089 Device Information : IOPS MiB/s Average min max
00:29:29.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.30 9.66 12943.22 5021.50 47902.46
00:29:29.089 ========================================================
00:29:29.089 Total : 77.30 9.66 12943.22 5021.50 47902.46
00:29:29.089
00:29:29.089 11:15:41 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:29:29.089 11:15:41 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:29.089 11:15:41 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:29.089 EAL: No free 2048 kB hugepages reported on node 1
00:29:39.082 Initializing NVMe Controllers
00:29:39.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:39.082 Initialization complete. Launching workers.
00:29:39.082 ========================================================
00:29:39.082 Latency(us)
00:29:39.082 Device Information : IOPS MiB/s Average min max
00:29:39.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7507.20 3.67 4261.70 298.33 11060.99
00:29:39.082 ========================================================
00:29:39.082 Total : 7507.20 3.67 4261.70 298.33 11060.99
00:29:39.082
00:29:39.082 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:39.082 11:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:39.082 EAL: No free 2048 kB hugepages reported on node 1
00:29:49.055 Initializing NVMe Controllers
00:29:49.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:49.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:49.055 Initialization complete. Launching workers.
00:29:49.055 ========================================================
00:29:49.055 Latency(us)
00:29:49.055 Device Information : IOPS MiB/s Average min max
00:29:49.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3974.35 496.79 8052.74 813.03 16734.27
00:29:49.055 ========================================================
00:29:49.055 Total : 3974.35 496.79 8052.74 813.03 16734.27
00:29:49.055
00:29:49.055 11:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:29:49.055 11:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:29:49.055 11:16:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:49.055 EAL: No free 2048 kB hugepages reported on node 1
00:29:59.023 Initializing NVMe Controllers
00:29:59.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:59.023 Controller IO queue size 128, less than required.
00:29:59.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
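The six fabrics runs in this stretch are one sweep of queue depth against I/O size, exactly the qd_depth/io_size loop set up at perf.sh@95-99 above. Reconstructed as a standalone loop for clarity (build path shortened from the workspace prefix):

    for qd in 1 32 128; do
        for o in 512 131072; do
            ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done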
00:29:59.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.023 Initialization complete. Launching workers. 00:29:59.023 ======================================================== 00:29:59.024 Latency(us) 00:29:59.024 Device Information : IOPS MiB/s Average min max 00:29:59.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11926.10 5.82 10737.25 1779.37 25392.17 00:29:59.024 ======================================================== 00:29:59.024 Total : 11926.10 5.82 10737.25 1779.37 25392.17 00:29:59.024 00:29:59.024 11:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.024 11:16:12 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.024 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.001 Initializing NVMe Controllers 00:30:09.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.001 Controller IO queue size 128, less than required. 00:30:09.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:09.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.001 Initialization complete. Launching workers. 00:30:09.001 ======================================================== 00:30:09.001 Latency(us) 00:30:09.001 Device Information : IOPS MiB/s Average min max 00:30:09.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1189.20 148.65 107879.56 10458.51 230644.65 00:30:09.001 ======================================================== 00:30:09.001 Total : 1189.20 148.65 107879.56 10458.51 230644.65 00:30:09.001 00:30:09.001 11:16:22 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.001 11:16:23 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eaa7bac9-8eb7-4992-95ec-99631294fd1e 00:30:09.565 11:16:23 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.823 11:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8714ce9e-ef4b-4538-b27a-851efc442f24 00:30:10.386 11:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:10.386 11:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:10.386 11:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:10.386 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.386 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.387 rmmod nvme_tcp 00:30:10.387 rmmod nvme_fabrics 00:30:10.387 rmmod nvme_keyring 00:30:10.387 11:16:24 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 345475 ']' 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 345475 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 345475 ']' 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 345475 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:10.387 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 345475 00:30:10.644 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:10.644 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:10.644 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 345475' 00:30:10.644 killing process with pid 345475 00:30:10.644 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 345475 00:30:10.644 11:16:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 345475 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.542 11:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.443 11:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:14.443 00:30:14.443 real 1m30.921s 00:30:14.443 user 5m36.762s 00:30:14.443 sys 0m15.586s 00:30:14.443 11:16:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.443 11:16:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:14.443 ************************************ 00:30:14.443 END TEST nvmf_perf 00:30:14.443 ************************************ 00:30:14.443 11:16:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:14.443 11:16:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.443 11:16:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:14.443 11:16:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.443 11:16:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.443 ************************************ 00:30:14.443 START TEST nvmf_fio_host 00:30:14.443 ************************************ 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:14.443 * Looking for test storage... 
00:30:14.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.443 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.444 11:16:28 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:16.345 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
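
The enumeration traced here is nvmf/common.sh building its list of usable NICs: it collects PCI functions matching known Intel E810/X722 and Mellanox device IDs (0x159b is an E810 port, bound to the ice driver), then, for TCP, keeps only functions that expose an up net device. The device-to-interface step reduces to a sysfs glob — a minimal sketch of that step, with $pci set to an example address taken from this run:

    #!/usr/bin/env bash
    # Resolve a PCI function to its kernel net device names,
    # as nvmf/common.sh@383-400 does in the trace around this point.
    pci=0000:0a:00.0                                   # example address from this log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep iface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

On this host both 0x159b ports resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which the harness goes on to use as the target and initiator sides.
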
00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:16.345 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:16.345 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:16.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
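
With is_hw=yes and a TCP transport, the nvmf_tcp_init step that follows splits the two physical ports into a point-to-point test link: cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side). A condensed sketch of that setup, using the same commands that appear in the trace below (the per-interface address flushes are omitted for brevity):

    #!/usr/bin/env bash
    # Target/initiator namespace split performed by nvmf_tcp_init
    # (nvmf/common.sh@229-264 in the trace that follows).
    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0       # target address inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The two pings that follow in the log verify both directions of this link before nvmf_tgt is launched inside the namespace (hence the "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command line later in this test).
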
00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.345 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:30:16.604 00:30:16.604 --- 10.0.0.2 ping statistics --- 00:30:16.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.604 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:16.604 00:30:16.604 --- 10.0.0.1 ping statistics --- 00:30:16.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.604 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=358173 00:30:16.604 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 358173 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 358173 ']' 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:16.605 11:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.605 [2024-07-11 11:16:30.846769] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:30:16.605 [2024-07-11 11:16:30.846836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.605 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.605 [2024-07-11 11:16:30.909365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.605 [2024-07-11 11:16:30.992604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:16.605 [2024-07-11 11:16:30.992654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.605 [2024-07-11 11:16:30.992678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.605 [2024-07-11 11:16:30.992690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.605 [2024-07-11 11:16:30.992700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.605 [2024-07-11 11:16:30.992781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.605 [2024-07-11 11:16:30.992845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.605 [2024-07-11 11:16:30.992896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.605 [2024-07-11 11:16:30.992899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.863 11:16:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:16.863 11:16:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:16.863 11:16:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:17.120 [2024-07-11 11:16:31.331204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.120 11:16:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:17.120 11:16:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:17.120 11:16:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.120 11:16:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:17.379 Malloc1 00:30:17.379 11:16:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.636 11:16:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:17.894 11:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.152 [2024-07-11 11:16:32.345535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.152 11:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:18.410 11:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.668 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:18.668 fio-3.35 00:30:18.668 Starting 1 thread 00:30:18.668 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.205 00:30:21.205 test: (groupid=0, jobs=1): err= 0: pid=358531: Thu Jul 11 11:16:35 2024 00:30:21.205 read: IOPS=8184, BW=32.0MiB/s (33.5MB/s)(65.5MiB/2048msec) 00:30:21.205 slat (nsec): min=1996, max=150405, avg=2614.78, stdev=1919.38 00:30:21.205 clat (usec): min=2283, max=55559, avg=8518.40, stdev=2956.69 00:30:21.205 lat (usec): min=2313, max=55561, avg=8521.02, stdev=2956.67 00:30:21.205 clat percentiles (usec): 00:30:21.205 | 1.00th=[ 6783], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7767], 00:30:21.205 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:30:21.205 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:30:21.205 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[53740], 99.95th=[54264], 00:30:21.205 | 99.99th=[55313] 00:30:21.205 bw ( KiB/s): 
min=32616, max=33728, per=100.00%, avg=33376.00, stdev=518.99, samples=4 00:30:21.205 iops : min= 8154, max= 8432, avg=8344.00, stdev=129.75, samples=4 00:30:21.205 write: IOPS=8186, BW=32.0MiB/s (33.5MB/s)(65.5MiB/2048msec); 0 zone resets 00:30:21.205 slat (usec): min=2, max=133, avg= 2.76, stdev= 1.40 00:30:21.205 clat (usec): min=1635, max=53981, avg=7042.35, stdev=2521.95 00:30:21.205 lat (usec): min=1644, max=53984, avg=7045.11, stdev=2521.93 00:30:21.205 clat percentiles (usec): 00:30:21.205 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:30:21.205 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:30:21.205 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:30:21.205 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[52167], 99.95th=[53216], 00:30:21.205 | 99.99th=[53740] 00:30:21.205 bw ( KiB/s): min=33088, max=33856, per=100.00%, avg=33424.00, stdev=319.47, samples=4 00:30:21.205 iops : min= 8272, max= 8464, avg=8356.00, stdev=79.87, samples=4 00:30:21.205 lat (msec) : 2=0.02%, 4=0.11%, 10=99.37%, 20=0.12%, 50=0.13% 00:30:21.205 lat (msec) : 100=0.25% 00:30:21.205 cpu : usr=61.99%, sys=36.35%, ctx=125, majf=0, minf=32 00:30:21.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:21.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.205 issued rwts: total=16762,16765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.205 00:30:21.205 Run status group 0 (all jobs): 00:30:21.205 READ: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=65.5MiB (68.7MB), run=2048-2048msec 00:30:21.205 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=65.5MiB (68.7MB), run=2048-2048msec 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:21.205 
11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:21.205 11:16:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.205 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:21.205 fio-3.35 00:30:21.205 Starting 1 thread 00:30:21.205 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.735 00:30:23.735 test: (groupid=0, jobs=1): err= 0: pid=358866: Thu Jul 11 11:16:37 2024 00:30:23.735 read: IOPS=7709, BW=120MiB/s (126MB/s)(242MiB/2009msec) 00:30:23.735 slat (usec): min=2, max=106, avg= 3.83, stdev= 1.97 00:30:23.735 clat (usec): min=1884, max=18554, avg=9359.07, stdev=2065.99 00:30:23.735 lat (usec): min=1887, max=18557, avg=9362.91, stdev=2065.99 00:30:23.735 clat percentiles (usec): 00:30:23.735 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7767], 00:30:23.735 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:30:23.735 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11863], 95.00th=[12911], 00:30:23.735 | 99.00th=[15664], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:30:23.735 | 99.99th=[17171] 00:30:23.735 bw ( KiB/s): min=56672, max=68512, per=51.25%, avg=63216.00, stdev=5792.29, samples=4 00:30:23.735 iops : min= 3542, max= 4282, avg=3951.00, stdev=362.02, samples=4 00:30:23.735 write: IOPS=4544, BW=71.0MiB/s (74.5MB/s)(129MiB/1819msec); 0 zone resets 00:30:23.735 slat (usec): min=30, max=194, avg=34.47, stdev= 6.13 00:30:23.735 clat (usec): min=7512, max=22118, avg=12590.84, stdev=2068.98 00:30:23.735 lat (usec): min=7563, max=22150, avg=12625.30, stdev=2069.03 00:30:23.735 clat percentiles (usec): 00:30:23.735 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:30:23.735 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:30:23.735 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15401], 95.00th=[16319], 00:30:23.735 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21627], 99.95th=[21890], 00:30:23.735 | 99.99th=[22152] 00:30:23.735 bw ( KiB/s): min=58464, max=71584, per=90.47%, avg=65776.00, stdev=6387.75, samples=4 00:30:23.735 iops : min= 3654, max= 4474, avg=4111.00, stdev=399.23, samples=4 00:30:23.735 lat (msec) : 2=0.01%, 4=0.15%, 10=45.70%, 20=54.08%, 50=0.05% 00:30:23.735 cpu : 
usr=76.79%, sys=21.91%, ctx=68, majf=0, minf=56 00:30:23.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:23.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:23.735 issued rwts: total=15488,8266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:23.735 00:30:23.735 Run status group 0 (all jobs): 00:30:23.735 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=242MiB (254MB), run=2009-2009msec 00:30:23.735 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=129MiB (135MB), run=1819-1819msec 00:30:23.735 11:16:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:23.735 11:16:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:27.013 Nvme0n1 00:30:27.014 11:16:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cb0a8548-7bdd-42aa-8f7d-332d552bd915 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cb0a8548-7bdd-42aa-8f7d-332d552bd915 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=cb0a8548-7bdd-42aa-8f7d-332d552bd915 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:30.297 { 00:30:30.297 "uuid": "cb0a8548-7bdd-42aa-8f7d-332d552bd915", 00:30:30.297 "name": "lvs_0", 00:30:30.297 "base_bdev": "Nvme0n1", 00:30:30.297 "total_data_clusters": 930, 00:30:30.297 
"free_clusters": 930, 00:30:30.297 "block_size": 512, 00:30:30.297 "cluster_size": 1073741824 00:30:30.297 } 00:30:30.297 ]' 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cb0a8548-7bdd-42aa-8f7d-332d552bd915") .free_clusters' 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cb0a8548-7bdd-42aa-8f7d-332d552bd915") .cluster_size' 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:30.297 952320 00:30:30.297 11:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:30.554 8f7e9370-cf68-4434-90ce-4a7d5bd46509 00:30:30.555 11:16:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:30.812 11:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:31.069 11:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.326 11:16:45 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:31.326 11:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.583 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:31.583 fio-3.35 00:30:31.583 Starting 1 thread 00:30:31.583 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.107 00:30:34.107 test: (groupid=0, jobs=1): err= 0: pid=360142: Thu Jul 11 11:16:48 2024 00:30:34.107 read: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec) 00:30:34.107 slat (usec): min=2, max=157, avg= 2.89, stdev= 2.57 00:30:34.107 clat (usec): min=1249, max=171472, avg=12167.04, stdev=11886.21 00:30:34.107 lat (usec): min=1253, max=171506, avg=12169.94, stdev=11886.58 00:30:34.107 clat percentiles (msec): 00:30:34.107 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:30:34.107 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:30:34.107 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 13], 00:30:34.107 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:34.107 | 99.99th=[ 171] 00:30:34.107 bw ( KiB/s): min=15768, max=25632, per=99.92%, avg=22816.00, stdev=4712.33, samples=4 00:30:34.107 iops : min= 3942, max= 6408, avg=5704.00, stdev=1178.08, samples=4 00:30:34.107 write: IOPS=5690, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2009msec); 0 zone resets 00:30:34.107 slat (usec): min=2, max=146, avg= 3.00, stdev= 2.09 00:30:34.107 clat (usec): min=343, max=169164, avg=10099.99, stdev=11151.42 00:30:34.107 lat (usec): min=346, max=169173, avg=10102.99, stdev=11151.80 00:30:34.107 clat percentiles (msec): 00:30:34.107 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:30:34.107 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:30:34.107 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:30:34.107 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:34.107 | 99.99th=[ 169] 00:30:34.107 bw ( KiB/s): min=16680, max=24896, per=99.86%, avg=22730.00, stdev=4034.80, samples=4 00:30:34.107 iops : min= 4170, max= 6224, avg=5682.50, stdev=1008.70, samples=4 00:30:34.107 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:34.107 lat (msec) : 2=0.03%, 4=0.12%, 10=45.80%, 20=53.47%, 250=0.56% 00:30:34.107 cpu : usr=58.07%, sys=40.54%, ctx=98, majf=0, minf=32 00:30:34.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:34.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:34.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:34.107 issued rwts: total=11468,11432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:34.107 00:30:34.107 Run status group 0 (all jobs): 00:30:34.107 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:30:34.107 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2009-2009msec 00:30:34.107 11:16:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:34.107 11:16:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fbeb846f-5290-4b97-8425-af018aa86150 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fbeb846f-5290-4b97-8425-af018aa86150 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=fbeb846f-5290-4b97-8425-af018aa86150 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:35.479 { 00:30:35.479 "uuid": "cb0a8548-7bdd-42aa-8f7d-332d552bd915", 00:30:35.479 "name": "lvs_0", 00:30:35.479 "base_bdev": "Nvme0n1", 00:30:35.479 "total_data_clusters": 930, 00:30:35.479 "free_clusters": 0, 00:30:35.479 "block_size": 512, 00:30:35.479 "cluster_size": 1073741824 00:30:35.479 }, 00:30:35.479 { 00:30:35.479 "uuid": "fbeb846f-5290-4b97-8425-af018aa86150", 00:30:35.479 "name": "lvs_n_0", 00:30:35.479 "base_bdev": "8f7e9370-cf68-4434-90ce-4a7d5bd46509", 00:30:35.479 "total_data_clusters": 237847, 00:30:35.479 "free_clusters": 237847, 00:30:35.479 "block_size": 512, 00:30:35.479 "cluster_size": 4194304 00:30:35.479 } 00:30:35.479 ]' 00:30:35.479 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fbeb846f-5290-4b97-8425-af018aa86150") .free_clusters' 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fbeb846f-5290-4b97-8425-af018aa86150") .cluster_size' 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:35.736 951388 00:30:35.736 11:16:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:36.301 96aa4b4f-f7cc-4000-8915-299e7f0f690e 00:30:36.301 11:16:50 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:36.558 11:16:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:36.816 11:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:37.074 11:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp 
adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:37.360 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:37.360 fio-3.35 00:30:37.360 Starting 1 thread 00:30:37.360 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.880 00:30:39.880 test: (groupid=0, jobs=1): err= 0: pid=360886: Thu Jul 11 11:16:53 2024 00:30:39.880 read: IOPS=5833, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:30:39.880 slat (nsec): min=1987, max=190902, avg=2782.28, stdev=2611.94 00:30:39.880 clat (usec): min=4673, max=19477, avg=11994.36, stdev=1087.67 00:30:39.880 lat (usec): min=4683, max=19479, avg=11997.15, stdev=1087.54 00:30:39.880 clat percentiles (usec): 00:30:39.880 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:30:39.880 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:39.880 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:30:39.880 | 99.00th=[14353], 99.50th=[14484], 99.90th=[17695], 99.95th=[17957], 00:30:39.880 | 99.99th=[19530] 00:30:39.880 bw ( KiB/s): min=21848, max=23952, per=99.87%, avg=23304.00, stdev=981.43, samples=4 00:30:39.880 iops : min= 5462, max= 5988, avg=5826.00, stdev=245.36, samples=4 00:30:39.880 write: IOPS=5820, BW=22.7MiB/s (23.8MB/s)(45.7MiB/2009msec); 0 zone resets 00:30:39.880 slat (usec): min=2, max=136, avg= 2.90, stdev= 1.84 00:30:39.880 clat (usec): min=2320, max=17646, avg=9756.03, stdev=899.11 00:30:39.880 lat (usec): min=2328, max=17648, avg=9758.93, stdev=899.04 00:30:39.880 clat percentiles (usec): 00:30:39.880 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:39.880 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:30:39.880 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:39.880 | 99.00th=[11731], 99.50th=[11994], 99.90th=[15270], 99.95th=[16581], 00:30:39.880 | 99.99th=[16712] 00:30:39.880 bw ( KiB/s): min=22936, max=23424, per=99.94%, avg=23270.00, stdev=230.70, samples=4 00:30:39.880 iops : min= 5734, max= 5856, avg=5817.50, stdev=57.67, samples=4 00:30:39.880 lat (msec) : 4=0.05%, 10=32.21%, 20=67.74% 00:30:39.880 cpu : usr=62.40%, sys=36.16%, ctx=92, majf=0, minf=32 00:30:39.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:39.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:39.880 issued rwts: total=11720,11694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:39.880 00:30:39.880 Run status group 0 (all jobs): 00:30:39.880 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.0MB), run=2009-2009msec 00:30:39.880 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.7MiB (47.9MB), run=2009-2009msec 00:30:39.880 11:16:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:39.880 11:16:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:39.880 11:16:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:44.059 11:16:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore 
-l lvs_n_0 00:30:44.059 11:16:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:47.339 11:17:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:47.339 11:17:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.234 rmmod nvme_tcp 00:30:49.234 rmmod nvme_fabrics 00:30:49.234 rmmod nvme_keyring 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 358173 ']' 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 358173 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 358173 ']' 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 358173 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 358173 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 358173' 00:30:49.234 killing process with pid 358173 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 358173 00:30:49.234 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 358173 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.493 11:17:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.397 11:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:51.397 00:30:51.397 real 0m37.162s 00:30:51.397 user 2m21.926s 00:30:51.397 sys 0m7.437s 00:30:51.397 11:17:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.397 11:17:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.397 ************************************ 00:30:51.397 END TEST nvmf_fio_host 00:30:51.397 ************************************ 00:30:51.397 11:17:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:51.397 11:17:05 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:51.397 11:17:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:51.397 11:17:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.397 11:17:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.397 ************************************ 00:30:51.397 START TEST nvmf_failover 00:30:51.397 ************************************ 00:30:51.397 11:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:51.656 * Looking for test storage... 00:30:51.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:51.656 11:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
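# --- annotation: the gather_supported_nvmf_pci_devs trace around this point
# pre-registers the PCI vendor:device IDs of the NIC families the test can
# drive (Intel E810/X722, Mellanox ConnectX) before walking the bus. A minimal
# standalone sketch of that classification; the lspci parsing here is an
# illustrative stand-in for the script's pre-built pci_bus_cache map. ---
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
while read -r addr vendor device; do
  case "$vendor:$device" in
    "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # Intel E810 (ice)
    "$intel:0x37d2")                   x722+=("$addr") ;;  # Intel X722 (i40e)
    "$mellanox:"*)                     mlx+=("$addr")  ;;  # Mellanox ConnectX
  esac
done < <(lspci -Dnmm | awk '{print $1, "0x" $3, "0x" $4}' | tr -d '"')
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"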
00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:53.556 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:53.556 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:53.556 
11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:53.556 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:53.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:53.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
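# --- annotation: the nvmf_tcp_init commands traced here wire the two E810
# ports back-to-back: the target-side port is isolated in its own network
# namespace so the NVMe/TCP traffic actually crosses the link. Condensed from
# the surrounding trace (commands verbatim): ---
ip netns add cvl_0_0_ns_spdk                          # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port out of root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                    # initiator -> target sanity check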
00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:53.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:30:53.557 00:30:53.557 --- 10.0.0.2 ping statistics --- 00:30:53.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.557 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:53.557 00:30:53.557 --- 10.0.0.1 ping statistics --- 00:30:53.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.557 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=364229 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 364229 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 364229 ']' 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.557 11:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:53.815 [2024-07-11 11:17:08.013002] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:30:53.815 [2024-07-11 11:17:08.013101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.815 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.815 [2024-07-11 11:17:08.078031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:53.815 [2024-07-11 11:17:08.166764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.815 [2024-07-11 11:17:08.166840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.815 [2024-07-11 11:17:08.166854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.815 [2024-07-11 11:17:08.166880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.815 [2024-07-11 11:17:08.166891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.815 [2024-07-11 11:17:08.166989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.815 [2024-07-11 11:17:08.167043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:53.815 [2024-07-11 11:17:08.167046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.073 11:17:08 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:54.331 [2024-07-11 11:17:08.526839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.331 11:17:08 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:54.589 Malloc0 00:30:54.589 11:17:08 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:54.847 11:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.105 11:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.362 [2024-07-11 11:17:09.613645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.362 11:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:55.620 [2024-07-11 11:17:09.854309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:55.620 11:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:55.877 [2024-07-11 11:17:10.107374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:55.877 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=364485 00:30:55.877 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:55.877 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 364485 /var/tmp/bdevperf.sock 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 364485 ']' 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
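# --- annotation: the RPCs traced above assemble the failover fixture: one
# malloc-backed subsystem exported on three TCP listeners, with bdevperf
# acting as the host. Condensed replay (rpc path as in this trace): ---
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                        # three failover paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches a path; the
# test later drops listeners via nvmf_subsystem_remove_listener to force failover.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1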
00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.878 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:56.135 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:56.135 11:17:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:56.135 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.392 NVMe0n1 00:30:56.392 11:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.957 00:30:56.957 11:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=364545 00:30:56.957 11:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:56.957 11:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:57.895 11:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.154 [2024-07-11 11:17:12.358827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.154 [2024-07-11 11:17:12.358933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.154 [2024-07-11 11:17:12.358965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.154 [2024-07-11 11:17:12.358978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.154 [2024-07-11 11:17:12.358990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359108] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 [2024-07-11 11:17:12.359143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2270 is same with the state(5) to be set 00:30:58.155 11:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:01.439 11:17:15 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.439 00:31:01.439 11:17:15 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.698 11:17:15 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:04.987 11:17:18 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.987 [2024-07-11 11:17:19.233444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.987 11:17:19 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:06.012 11:17:20 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:06.276 11:17:20 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 364545 00:31:12.848 0 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 364485 ']' 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 364485' 00:31:12.848 killing process with pid 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 364485 00:31:12.848 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:12.848 [2024-07-11 11:17:10.170895] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
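# --- annotation: the cat of try.txt starting above replays bdevperf's own log
# from the 15 s run. The flood of "ABORTED - SQ DELETION (00/08)" completions
# that follows reads as the expected failover signature: removing the 4420
# listener deletes that connection's submission queues (the tcp.c:1607
# state-change errors at the same timestamp are the target side of the same
# teardown), every command still in flight on them completes with status
# 00/08, and bdevperf resubmits on the surviving 4421 path -- hence the lone
# "0" printed after "wait 364545". A quick way to gauge how much I/O was
# caught mid-switch: ---
grep -c 'ABORTED - SQ DELETION' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt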
00:31:12.848 [2024-07-11 11:17:10.170990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364485 ] 00:31:12.848 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.848 [2024-07-11 11:17:10.230240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.848 [2024-07-11 11:17:10.316330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.848 Running I/O for 15 seconds... 00:31:12.848 [2024-07-11 11:17:12.359729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.359970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.359984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.360014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:12.848 [2024-07-11 11:17:12.360043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.848 [2024-07-11 11:17:12.360324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.848 [2024-07-11 11:17:12.360339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:12.848 [2024-07-11 11:17:12.360352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:12.848 [2024-07-11 11:17:12.360367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:12.848 [2024-07-11 11:17:12.360379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical command/completion trace pairs elided: WRITE commands for lba:76656 through lba:77280 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), plus one READ at lba:76560, every one completed with ABORTED - SQ DELETION (00/08) ...]
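Every completion in this run reports the same status, printed as ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), which is the status the host driver assigns to every command still outstanding on a submission queue that is being torn down. Below is a minimal sketch, in C against the public SPDK NVMe API, of how an I/O completion callback can recognize this status and flag the request for resubmission; the io_ctx structure, the needs_retry field, and the callback name are hypothetical illustration, not code from this test.

#include <stdbool.h>
#include "spdk/nvme.h" /* struct spdk_nvme_cpl, status code enums */

/* Hypothetical per-I/O bookkeeping, for this sketch only. */
struct io_ctx {
    bool needs_retry;
};

/* Completion callback: detect the "ABORTED - SQ DELETION (00/08)"
 * status seen in every trace pair above and mark the I/O so it can
 * be resubmitted on a fresh qpair after the controller reset. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    struct io_ctx *ctx = arg;

    if (spdk_nvme_cpl_is_error(cpl) &&
        cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* The qpair is being deleted; the data never reached the
         * namespace. Queue this request for retry. */
        ctx->needs_retry = true;
        return;
    }
    ctx->needs_retry = false;
}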
[... the aborted WRITE trace pairs continue through lba:77320 ...]
00:31:12.850 [2024-07-11 11:17:12.362976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:12.850 [2024-07-11 11:17:12.362993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77328 len:8 PRP1 0x0 PRP2 0x0 
00:31:12.850 [2024-07-11 11:17:12.363006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:12.850 [2024-07-11 11:17:12.363023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
[... the same manual-complete/abort-queued sequence repeats for the queued WRITEs at lba:77336 through lba:77504 ...]
00:31:12.851 [2024-07-11 11:17:12.364170] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c4760 was disconnected and freed. reset controller. 
00:31:12.851 [2024-07-11 11:17:12.364190] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:31:12.851 [2024-07-11 11:17:12.364224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:12.851 [2024-07-11 11:17:12.364241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the three remaining admin-queue ASYNC EVENT REQUESTs (cid:1 through cid:3) are aborted the same way ...]
00:31:12.851 [2024-07-11 11:17:12.364345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:12.851 [2024-07-11 11:17:12.364392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690830 (9): Bad file descriptor 
00:31:12.851 [2024-07-11 11:17:12.367609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:31:12.851 [2024-07-11 11:17:12.492783] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
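What the block above records is the bdev_nvme failover path working as intended: the TCP qpair to 10.0.0.2:4420 dies, every in-flight and queued command is completed with the abort status, the transport ID is switched to 10.0.0.2:4421, and the controller reset completes successfully about 125 ms after the disconnect. For a standalone host application, the equivalent recovery is a controller reset followed by reallocating the I/O qpair. The sketch below uses the public SPDK calls spdk_nvme_ctrlr_free_io_qpair, spdk_nvme_ctrlr_reset, spdk_nvme_ctrlr_is_failed, and spdk_nvme_ctrlr_alloc_io_qpair; the function name and the reduced error handling are illustrative, and this shows the general recovery flow, not the exact code the bdev_nvme module runs.

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: recover from a dead qpair the way the log above shows,
 * i.e. reset the controller, then allocate a replacement I/O qpair
 * on which flagged requests can be resubmitted. Returns the new
 * qpair, or NULL if the reset itself failed. */
static struct spdk_nvme_qpair *
recover_after_qpair_loss(struct spdk_nvme_ctrlr *ctrlr,
                         struct spdk_nvme_qpair *dead_qpair)
{
    int rc;

    /* The old qpair was disconnected; release it before the reset. */
    spdk_nvme_ctrlr_free_io_qpair(dead_qpair);

    rc = spdk_nvme_ctrlr_reset(ctrlr);
    if (rc != 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
        fprintf(stderr, "controller reset failed: %d\n", rc);
        return NULL;
    }

    /* NULL opts selects the driver defaults for the new qpair. */
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}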
00:31:12.851 [2024-07-11 11:17:15.943401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:12.851 [2024-07-11 11:17:15.943470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... roughly 3.5 seconds after the failover the same pattern recurs on the new path: command/completion trace pairs for WRITEs at lba:87416 through lba:88040, plus a READ at lba:87408, every one completed with ABORTED - SQ DELETION (00/08), elided ...] 
00:31:12.854 [2024-07-11 11:17:15.945906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.945920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.945935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.945948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.945964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.945977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.945992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88208 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.854 [2024-07-11 11:17:15.946598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88240 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88248 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88256 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88264 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946834] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88272 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88280 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.946956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88288 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.946969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.946982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.946993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.947004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88296 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.947017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.854 [2024-07-11 11:17:15.947029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.854 [2024-07-11 11:17:15.947040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.854 [2024-07-11 11:17:15.947051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88304 len:8 PRP1 0x0 PRP2 0x0 00:31:12.854 [2024-07-11 11:17:15.947063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88312 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:31:12.855 [2024-07-11 11:17:15.947133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88320 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88328 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88336 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88344 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947423] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88384 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88392 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88400 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.855 [2024-07-11 11:17:15.947681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88408 len:8 PRP1 0x0 PRP2 0x0 00:31:12.855 [2024-07-11 11:17:15.947700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.855 [2024-07-11 11:17:15.947714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.855 [2024-07-11 11:17:15.947725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
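A note on reading the completions above: the "(00/08)" pair that spdk_nvme_print_completion prints is the NVMe status code type and status code. Per the NVMe base specification, SCT 0x0 is Generic Command Status and SC 0x08 within it is "Command Aborted due to SQ Deletion", which is exactly what every I/O still queued on qid:1 reports while the submission queue is torn down; dnr:0 means the host is allowed to retry. A minimal standalone C sketch of that decoding, mirroring the spec's status-field layout rather than quoting SPDK's own structs (nvme_status_bits is an illustrative name of ours):

#include <stdint.h>
#include <stdio.h>

/* Layout of the 16-bit status field of an NVMe completion queue entry
 * (DW3 bits 31:16). Bitfield packing is implementation-defined in C,
 * so treat this as a sketch, not a wire-format definition. */
struct nvme_status_bits {
        uint16_t p   : 1; /* phase tag, the "p:0" above */
        uint16_t sc  : 8; /* status code: 0x08 = aborted, SQ deletion */
        uint16_t sct : 3; /* status code type: 0x0 = generic */
        uint16_t crd : 2; /* command retry delay */
        uint16_t m   : 1; /* more, the "m:0" above */
        uint16_t dnr : 1; /* do not retry, the "dnr:0" above */
};

int main(void)
{
        /* The status every queued I/O reports while qid:1 is deleted. */
        struct nvme_status_bits st = { .sct = 0x0, .sc = 0x08 };

        printf("ABORTED - SQ DELETION (%02x/%02x) p:%u m:%u dnr:%u\n",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
}

Because dnr is clear on every one of these completions, the bdev layer is free to requeue the aborted I/O once the controller reset below succeeds.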
00:31:12.855 [2024-07-11 11:17:15.947820] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c0300 was disconnected and freed. reset controller.
00:31:12.855 [2024-07-11 11:17:15.947839] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four queued ASYNC EVENT REQUEST admin commands (qid:0, cid:3 down to cid:0) each aborted with SQ DELETION (00/08); records omitted ...]
00:31:12.855 [2024-07-11 11:17:15.948010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:12.855 [2024-07-11 11:17:15.948052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690830 (9): Bad file descriptor
00:31:12.855 [2024-07-11 11:17:15.951262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:12.855 [2024-07-11 11:17:16.119815] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
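The notices above are the whole failover arc in three steps: the TCP qpair to 10.0.0.2:4421 is disconnected (which is what aborted all the queued I/O with SQ DELETION), bdev_nvme rotates to the next configured trid, 10.0.0.2:4422, and the controller reset completes about 170 ms later, after which the test's I/O resumes. A rough sketch of that path rotation in plain C, under simplified types of our own (path_list and path_advance are hypothetical names, not bdev_nvme's internal API):

#include <stdio.h>

/* Simplified stand-in for a controller's list of transport IDs. */
struct path_list {
        const char *trid[4]; /* "addr:svcid" strings, NULL-terminated */
        int active;          /* index of the path currently in use */
};

/* On qpair disconnect: queued I/O has already been aborted with
 * SQ DELETION (dnr:0), so advance to the next trid and let the
 * reset/reconnect run before I/O is resubmitted. */
static const char *path_advance(struct path_list *pl)
{
        int next = pl->active + 1;

        if (pl->trid[next] == NULL)
                next = 0; /* wrap back to the first path */
        pl->active = next;
        return pl->trid[next];
}

int main(void)
{
        struct path_list pl = {
                .trid = { "10.0.0.2:4421", "10.0.0.2:4422", NULL },
                .active = 0,
        };
        const char *from = pl.trid[pl.active];
        const char *to = path_advance(&pl);

        printf("Start failover from %s to %s\n", from, to);
        return 0;
}

The real module also tracks per-path state and retry timers; the point of the sketch is only the ordering the log shows: abort queued I/O, pick the next trid, reset, then resume.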
00:31:12.855 [2024-07-11 11:17:20.498391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:12.855 [2024-07-11 11:17:20.498470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:12.855 [2024-07-11 11:17:20.498517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:12.855 [2024-07-11 11:17:20.498533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for lba:46040 through lba:46416, and the same WRITE / ABORTED - SQ DELETION pair for lba:46488 through lba:46864, all on qid:1; identical records omitted ...]
00:31:12.858 [2024-07-11 11:17:20.501438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:12.858 [2024-07-11 11:17:20.501455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46872 len:8 PRP1 0x0 PRP2 0x0
00:31:12.858 [2024-07-11 11:17:20.501468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:12.858 [2024-07-11 11:17:20.501485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:12.858 [2024-07-11 11:17:20.501497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:12.858 [2024-07-11 11:17:20.501508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46880 len:8 PRP1 0x0 PRP2 0x0
00:31:12.858 [2024-07-11 11:17:20.501521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:12.858 [2024-07-11 11:17:20.501534] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46888 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46896 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46904 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46912 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46920 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46928 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:31:12.858 [2024-07-11 11:17:20.501853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46936 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46944 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.501961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46952 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.501974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.501987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.501998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46960 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46968 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46976 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502152] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46984 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46992 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47000 len:8 PRP1 0x0 PRP2 0x0 00:31:12.858 [2024-07-11 11:17:20.502272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.858 [2024-07-11 11:17:20.502286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.858 [2024-07-11 11:17:20.502297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.858 [2024-07-11 11:17:20.502307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47008 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47016 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47024 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47032 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47040 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47048 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46424 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46432 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46440 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 
[2024-07-11 11:17:20.502748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46448 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46456 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46464 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:12.859 [2024-07-11 11:17:20.502901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:12.859 [2024-07-11 11:17:20.502912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46472 len:8 PRP1 0x0 PRP2 0x0 00:31:12.859 [2024-07-11 11:17:20.502924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.502990] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6c0300 was disconnected and freed. reset controller. 
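A quick decode of the completion status that floods this stretch of the log (read against the NVMe base specification's generic status table, not anything SPDK-specific):

  (00/08)        ->  SCT 0x0 (Generic Command Status), SC 0x08 = Command Aborted due to SQ Deletion
  p:0 m:0 dnr:0  ->  phase tag 0; More = 0; Do Not Retry = 0, so the host is free to retry the I/O on another path

In other words, none of these are media errors: they are the expected completions for I/O caught in flight when bdev_nvme deletes the submission queue to fail over.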
00:31:12.859 [2024-07-11 11:17:20.503010] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:12.859 [2024-07-11 11:17:20.503044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.859 [2024-07-11 11:17:20.503062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.503078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.859 [2024-07-11 11:17:20.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.503105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.859 [2024-07-11 11:17:20.503118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.503138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:12.859 [2024-07-11 11:17:20.503151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.859 [2024-07-11 11:17:20.503164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:12.859 [2024-07-11 11:17:20.503219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x690830 (9): Bad file descriptor 00:31:12.859 [2024-07-11 11:17:20.506448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:12.859 [2024-07-11 11:17:20.671881] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
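The failover from 10.0.0.2:4422 back to 10.0.0.2:4420 above is only possible because the test earlier attached the same controller name to all three target ports, so bdev_nvme has alternate trids stored to rotate through. A minimal sketch of that setup, reconstructed from the rpc.py calls that appear verbatim later in this trace (the rpc.py path, socket, bdev name and NQN are the ones in this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for port in 4420 4421 4422; do
      # each attach with the same -b name records another failover path
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done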
00:31:12.859
00:31:12.859 Latency(us)
00:31:12.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:12.859 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:12.859 Verification LBA range: start 0x0 length 0x4000
00:31:12.859 NVMe0n1 : 15.01 8531.18 33.32 922.58 0.00 13511.62 621.99 18155.90
00:31:12.859 ===================================================================================================================
00:31:12.859 Total : 8531.18 33.32 922.58 0.00 13511.62 621.99 18155.90
00:31:12.859 Received shutdown signal, test time was about 15.000000 seconds
00:31:12.859
00:31:12.859 Latency(us)
00:31:12.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:12.859 ===================================================================================================================
00:31:12.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=366385
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 366385 /var/tmp/bdevperf.sock
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 366385 ']'
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:12.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
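Two pieces of harness mechanics worth pulling out of the trace above. First, the pass gate: host/failover.sh counts 'Resetting controller successful' lines and requires exactly three, one per forced path failure. Second, bdevperf is started idle (-z) against its own RPC socket so the test can attach controllers and kick off I/O on its own schedule. A hedged reconstruction of both, using only flags and paths shown in this log (the try.txt source for the grep and the $testdir variable are inferred from nearby trace lines, not verified against the script):

  # pass gate: three successful controller resets expected
  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count != 3 )) && exit 1

  # start bdevperf idle and drive it over /var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # autotest_common.sh helper
  # ... attach the NVMe0 paths via rpc.py as sketched earlier, then:
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"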
00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:12.859 11:17:26 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:12.859 [2024-07-11 11:17:27.036998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:12.859 11:17:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:13.118 [2024-07-11 11:17:27.289646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:13.118 11:17:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.377 NVMe0n1 00:31:13.377 11:17:27 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:13.942 00:31:13.942 11:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.199 00:31:14.199 11:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:14.199 11:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:14.456 11:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:14.715 11:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:18.001 11:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:18.001 11:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:18.001 11:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=367056 00:31:18.001 11:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:18.001 11:17:32 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 367056 00:31:19.376 0 00:31:19.376 11:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.376 [2024-07-11 11:17:26.565095] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:31:19.376 [2024-07-11 11:17:26.565186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366385 ] 00:31:19.376 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.376 [2024-07-11 11:17:26.625891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.376 [2024-07-11 11:17:26.709069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.376 [2024-07-11 11:17:29.052221] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:19.376 [2024-07-11 11:17:29.052305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.376 [2024-07-11 11:17:29.052328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.376 [2024-07-11 11:17:29.052360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.376 [2024-07-11 11:17:29.052374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.376 [2024-07-11 11:17:29.052388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.376 [2024-07-11 11:17:29.052401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.376 [2024-07-11 11:17:29.052415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.376 [2024-07-11 11:17:29.052429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.376 [2024-07-11 11:17:29.052442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:19.376 [2024-07-11 11:17:29.052485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.376 [2024-07-11 11:17:29.052516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0e830 (9): Bad file descriptor 00:31:19.376 [2024-07-11 11:17:29.184874] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:19.376 Running I/O for 1 seconds... 
00:31:19.376
00:31:19.376 Latency(us)
00:31:19.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:19.376 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:19.376 Verification LBA range: start 0x0 length 0x4000
00:31:19.376 NVMe0n1 : 1.02 8642.15 33.76 0.00 0.00 14746.73 3398.16 14757.74
00:31:19.376 ===================================================================================================================
00:31:19.376 Total : 8642.15 33.76 0.00 0.00 14746.73 3398.16 14757.74
00:31:19.376 11:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:19.376 11:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:19.376 11:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:19.634 11:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:19.634 11:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:19.891 11:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:20.456 11:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 366385
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 366385 ']'
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 366385
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 366385
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 366385'
00:31:23.739 killing process with pid 366385
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 366385
00:31:23.739 11:17:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 366385
00:31:23.739 11:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:31:23.739 11:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:23.996 11:17:38
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.996 rmmod nvme_tcp 00:31:23.996 rmmod nvme_fabrics 00:31:23.996 rmmod nvme_keyring 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 364229 ']' 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 364229 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 364229 ']' 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 364229 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 364229 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 364229' 00:31:23.996 killing process with pid 364229 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 364229 00:31:23.996 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 364229 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.255 11:17:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.788 11:17:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:26.788 00:31:26.788 real 0m34.877s 00:31:26.788 user 2m3.211s 00:31:26.788 sys 0m5.730s 00:31:26.788 11:17:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:26.788 11:17:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:26.788 
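For readability, the nvmftestfini teardown traced above reduces to roughly the following (the _remove_spdk_ns body is not expanded in this log, so the namespace removal step is an assumption; the other commands appear verbatim in the trace):

  kill 364229 && wait 364229    # killprocess: stop the nvmf_tgt reactor (pid from this log)
  modprobe -v -r nvme-tcp       # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns               # assumed to delete the cvl_0_0_ns_spdk namespace; internals not shown here
  ip -4 addr flush cvl_0_1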
************************************ 00:31:26.788 END TEST nvmf_failover 00:31:26.788 ************************************ 00:31:26.788 11:17:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:26.788 11:17:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:26.788 11:17:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:26.788 11:17:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.788 11:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:26.788 ************************************ 00:31:26.788 START TEST nvmf_host_discovery 00:31:26.788 ************************************ 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:26.788 * Looking for test storage... 00:31:26.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.788 11:17:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the same go/protoc/golangci bin directories once more each, @5 exports PATH, and @6 echoes the final value; the near-identical PATH strings are elided here ...]
00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:26.789 11:17:40
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:26.789 11:17:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.695 11:17:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:28.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:28.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:28.695 11:17:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:28.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:28.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:28.695 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.696 11:17:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:28.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:31:28.696 00:31:28.696 --- 10.0.0.2 ping statistics --- 00:31:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.696 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:31:28.696 00:31:28.696 --- 10.0.0.1 ping statistics --- 00:31:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.696 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=369653 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 369653 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 369653 ']' 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.696 11:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.696 [2024-07-11 11:17:42.917322] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:31:28.696 [2024-07-11 11:17:42.917404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.696 [2024-07-11 11:17:42.980889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.696 [2024-07-11 11:17:43.067080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.696 [2024-07-11 11:17:43.067128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.696 [2024-07-11 11:17:43.067157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.696 [2024-07-11 11:17:43.067169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.696 [2024-07-11 11:17:43.067179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:28.696 [2024-07-11 11:17:43.067205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.955 [2024-07-11 11:17:43.207112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.955 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 [2024-07-11 11:17:43.215302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 null0 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 null1 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=369788 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 369788 /tmp/host.sock 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 369788 ']' 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:28.956 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:28.956 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 [2024-07-11 11:17:43.284972] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:31:28.956 [2024-07-11 11:17:43.285040] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369788 ] 00:31:28.956 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.956 [2024-07-11 11:17:43.341389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.214 [2024-07-11 11:17:43.427447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.214 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 [2024-07-11 11:17:43.832956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:29.472 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:29.729 11:17:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.729 11:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:29.729 11:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:30.293 [2024-07-11 11:17:44.605917] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:30.293 [2024-07-11 11:17:44.605941] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:30.293 [2024-07-11 11:17:44.605969] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.293 [2024-07-11 11:17:44.692271] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:30.553 [2024-07-11 11:17:44.877009] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.553 [2024-07-11 11:17:44.877032] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:30.812 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:30.813 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 [2024-07-11 11:17:45.281062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:31.073 [2024-07-11 11:17:45.281567] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:31.073 [2024-07-11 11:17:45.281612] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:31.073 [2024-07-11 11:17:45.368872] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:31.073 11:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:31.073 [2024-07-11 11:17:45.428312] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.073 [2024-07-11 11:17:45.428334] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:31.073 [2024-07-11 11:17:45.428344] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:32.008 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:32.268 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.269 [2024-07-11 11:17:46.501877] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:32.269 [2024-07-11 11:17:46.501919] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:32.269 [2024-07-11 11:17:46.508594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.269 [2024-07-11 11:17:46.508627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.269 [2024-07-11 11:17:46.508680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.269 [2024-07-11 11:17:46.508698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.269 [2024-07-11 11:17:46.508712] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.269 [2024-07-11 11:17:46.508725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.269 [2024-07-11 11:17:46.508750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:32.269 [2024-07-11 11:17:46.508775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:32.269 [2024-07-11 11:17:46.508790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.269 [2024-07-11 11:17:46.518588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.528628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.528838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.528869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.528887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.528912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.528946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.528964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.528982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.529002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:32.269 [2024-07-11 11:17:46.538719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.538928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.538956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.538972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.538994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.539033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.539050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.539064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.539083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:32.269 [2024-07-11 11:17:46.548807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:32.269 [2024-07-11 11:17:46.549848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.549879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.549897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.549921] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.549957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.549976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.549990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.550010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 [2024-07-11 11:17:46.558896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.559063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.559091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.559107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.559130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.559151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.559164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.559183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.559203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 [2024-07-11 11:17:46.568985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.569144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.569172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.569187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.569209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.569229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.569242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.569255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.569273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.269 [2024-07-11 11:17:46.579070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.579288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.579316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.579331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.579353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.579373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.579387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.579399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.579417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:32.269 [2024-07-11 11:17:46.589138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.589306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.589334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 
[2024-07-11 11:17:46.589351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.589374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.589394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.589408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.589421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.589440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.269 [2024-07-11 11:17:46.599225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.599394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.599422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.599438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.599460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.599480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.599494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.599507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.599525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 [2024-07-11 11:17:46.609307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.609514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.609541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.609557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.609579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.609611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.609628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.609641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:32.269 [2024-07-11 11:17:46.609660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 [2024-07-11 11:17:46.619387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.619577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.619604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.619620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.619641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.619673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.619691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.269 [2024-07-11 11:17:46.619704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.269 [2024-07-11 11:17:46.619722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:32.269 11:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:32.269 [2024-07-11 11:17:46.629472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.269 [2024-07-11 11:17:46.629634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.269 [2024-07-11 11:17:46.629661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1277530 with addr=10.0.0.2, port=4420 00:31:32.269 [2024-07-11 11:17:46.629677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1277530 is same with the state(5) to be set 00:31:32.269 [2024-07-11 11:17:46.629698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277530 (9): Bad file descriptor 00:31:32.269 [2024-07-11 11:17:46.629763] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:32.269 [2024-07-11 11:17:46.629794] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:32.269 [2024-07-11 11:17:46.629836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.269 [2024-07-11 11:17:46.629856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.270 [2024-07-11 11:17:46.629870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.270 [2024-07-11 11:17:46.629894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
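The discovery_remove_controllers lines just above are the failover landing: the discovery log page no longer lists nqn.2016-06.io.spdk:cnode0 on 4420, while the 4421 path is confirmed again. The condition at host/discovery.sh@131 therefore compares get_subsystem_paths nvme0 with $NVMF_SECOND_PORT (4421); the first evaluation still sees '4420 4421', and after the one-second retry only '4421' remains and the check passes. A sketch of that helper, reconstructed from the rpc_cmd pipeline visible at host/discovery.sh@63, with /tmp/host.sock being the host application's RPC socket:

    get_subsystem_paths() {
        # Print the trsvcid (TCP port) of every connected path for controller $1,
        # numerically sorted and joined onto one line, e.g. "4420 4421" or "4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }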
00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:33.644 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:33.645 11:17:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.645 11:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.579 [2024-07-11 11:17:48.885337] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:34.579 [2024-07-11 11:17:48.885378] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:34.579 [2024-07-11 11:17:48.885401] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:34.579 [2024-07-11 11:17:48.971646] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:34.837 [2024-07-11 11:17:49.072794] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:34.837 [2024-07-11 11:17:49.072842] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:34.837 request: 00:31:34.837 { 00:31:34.837 "name": "nvme", 00:31:34.837 "trtype": "tcp", 00:31:34.837 "traddr": "10.0.0.2", 00:31:34.837 "adrfam": "ipv4", 00:31:34.837 "trsvcid": "8009", 00:31:34.837 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:34.837 "wait_for_attach": true, 00:31:34.837 "method": "bdev_nvme_start_discovery", 00:31:34.837 "req_id": 1 00:31:34.837 } 00:31:34.837 Got JSON-RPC error response 00:31:34.837 response: 00:31:34.837 { 00:31:34.837 "code": -17, 00:31:34.837 "message": "File exists" 00:31:34.837 } 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 request: 00:31:34.837 { 00:31:34.837 "name": "nvme_second", 00:31:34.837 "trtype": "tcp", 00:31:34.837 "traddr": "10.0.0.2", 00:31:34.837 "adrfam": "ipv4", 00:31:34.837 "trsvcid": "8009", 00:31:34.837 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:34.837 "wait_for_attach": true, 00:31:34.837 "method": "bdev_nvme_start_discovery", 00:31:34.837 "req_id": 1 00:31:34.837 } 00:31:34.837 Got JSON-RPC error response 00:31:34.837 response: 00:31:34.837 { 00:31:34.837 "code": -17, 00:31:34.837 "message": "File exists" 00:31:34.837 } 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:34.837 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.096 11:17:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.096 11:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.030 [2024-07-11 11:17:50.284376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.030 [2024-07-11 11:17:50.284458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ec0 with addr=10.0.0.2, port=8010 00:31:36.030 [2024-07-11 11:17:50.284492] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:36.030 [2024-07-11 11:17:50.284508] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:36.030 [2024-07-11 11:17:50.284522] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:36.965 [2024-07-11 11:17:51.286693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-07-11 11:17:51.286766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ec0 with addr=10.0.0.2, port=8010 00:31:36.965 [2024-07-11 11:17:51.286795] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:36.965 [2024-07-11 11:17:51.286810] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:36.965 [2024-07-11 11:17:51.286823] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:37.903 [2024-07-11 11:17:52.288938] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:37.903 request: 00:31:37.903 { 00:31:37.903 "name": "nvme_second", 00:31:37.903 "trtype": "tcp", 00:31:37.903 "traddr": "10.0.0.2", 00:31:37.903 "adrfam": "ipv4", 00:31:37.903 "trsvcid": "8010", 00:31:37.903 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:37.903 "wait_for_attach": false, 00:31:37.903 "attach_timeout_ms": 3000, 00:31:37.903 "method": "bdev_nvme_start_discovery", 00:31:37.903 "req_id": 1 00:31:37.903 } 00:31:37.903 Got JSON-RPC error response 00:31:37.903 response: 00:31:37.903 { 00:31:37.903 "code": -110, 
00:31:37.903 "message": "Connection timed out" 00:31:37.903 } 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:37.903 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 369788 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:38.161 rmmod nvme_tcp 00:31:38.161 rmmod nvme_fabrics 00:31:38.161 rmmod nvme_keyring 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 369653 ']' 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 369653 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 369653 ']' 00:31:38.161 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 369653 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 369653 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:38.162 
11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 369653' 00:31:38.162 killing process with pid 369653 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 369653 00:31:38.162 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 369653 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.419 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:38.420 11:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.332 11:17:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:40.332 00:31:40.332 real 0m13.968s 00:31:40.332 user 0m20.771s 00:31:40.332 sys 0m2.793s 00:31:40.332 11:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:40.332 11:17:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.332 ************************************ 00:31:40.332 END TEST nvmf_host_discovery 00:31:40.332 ************************************ 00:31:40.332 11:17:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:40.332 11:17:54 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:40.332 11:17:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:40.332 11:17:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.332 11:17:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.332 ************************************ 00:31:40.332 START TEST nvmf_host_multipath_status 00:31:40.332 ************************************ 00:31:40.332 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:40.590 * Looking for test storage... 
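The two JSON-RPC failures recorded before the teardown were the discovery test's deliberate negative paths: registering a discovery service whose parameters collide with an existing one returns -17 ("File exists"), and pointing one at 10.0.0.2:8010, where nothing listens, with a 3000 ms attach timeout returns -110 ("Connection timed out") after repeated refused connects. The same calls as direct CLI invocations, using only flags that appear verbatim in the trace (scripts/rpc.py is the front end behind the script's rpc_cmd wrapper):

    # Duplicate registration against the live 8009 discovery service -> -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # No listener on 8010; give up after 3000 ms -> -110 "Connection timed out"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000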
00:31:40.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.590 11:17:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.590 11:17:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:42.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:42.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
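gather_supported_nvmf_pci_devs, traced above, whitelists the NIC device IDs it knows how to drive (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox parts) and then resolves each matching PCI function to its kernel net device through sysfs; this job matches two E810 ports, 0000:0a:00.0 and 0000:0a:00.1. The resolution step, reduced to the shell lines nvmf/common.sh uses (its trace follows below at @383-@400):

    # Map a PCI function to its net device name(s) via sysfs.
    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob, e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"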
00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:42.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:42.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.492 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.492 11:17:56 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:31:42.493 00:31:42.493 --- 10.0.0.2 ping statistics --- 00:31:42.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.493 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:31:42.493 00:31:42.493 --- 10.0.0.1 ping statistics --- 00:31:42.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.493 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.493 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=372953 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 372953 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 372953 ']' 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.753 11:17:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:42.753 [2024-07-11 11:17:56.972402] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
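
The namespace setup traced above turns one host into a two-ended test rig: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, so the pings in both directions cross the physical NICs rather than loopback. nvmf_tgt is then launched inside that namespace ($NVMF_TARGET_NS_CMD prefixes every target command with "ip netns exec cvl_0_0_ns_spdk"). Condensed from the trace, with the same names and addresses:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target app then starts as:
  #   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
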
00:31:42.753 [2024-07-11 11:17:56.972470] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.753 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.753 [2024-07-11 11:17:57.033245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:42.753 [2024-07-11 11:17:57.117256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.753 [2024-07-11 11:17:57.117304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.753 [2024-07-11 11:17:57.117321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.754 [2024-07-11 11:17:57.117332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.754 [2024-07-11 11:17:57.117341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.754 [2024-07-11 11:17:57.117391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.754 [2024-07-11 11:17:57.117395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=372953 00:31:43.012 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:43.270 [2024-07-11 11:17:57.532256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.270 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:43.529 Malloc0 00:31:43.529 11:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:43.787 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:44.045 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.304 [2024-07-11 11:17:58.685458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.304 11:17:58 
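
Once nvmf_tgt answers on /var/tmp/spdk.sock, the test provisions it over RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (-a: allow any host; -r: enable ANA reporting, the feature this whole test exercises), the bdev as a namespace, and listeners on ports 4420 and 4421, which become the two ANA paths. The sequence, with commands verbatim from the trace and only the rpc.py path shortened to a variable:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
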
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:44.562 [2024-07-11 11:17:58.934168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=373160 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 373160 /var/tmp/bdevperf.sock 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 373160 ']' 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:44.562 11:17:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:45.128 11:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:45.128 11:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:45.128 11:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:45.128 11:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:45.694 Nvme0n1 00:31:45.694 11:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:45.951 Nvme0n1 00:31:45.952 11:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:45.952 11:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:47.852 11:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:47.853 11:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
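
On the initiator side the test starts bdevperf (-q 128 queue depth, -o 4096-byte I/Os, -w verify, -t 90 s, -z: idle until told to run over RPC) and builds a multipath bdev against it: the first bdev_nvme_attach_controller creates controller Nvme0 over port 4420; the second names the same controller but adds -x multipath and port 4421, which attaches a second path instead of failing, so a single bdev Nvme0n1 ends up with two I/O paths. bdev_nvme_set_options -r -1 appears to make bdev-level retries unlimited so I/O survives the path flips below, and -l -1 -o 10 keep reconnect attempts going indefinitely, 10 s apart. From the trace:

  brpc="$rpc_py -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10    # second path, same bdev
  # I/O then runs for the whole test via:
  #   .../spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
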
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:48.420 11:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.420 11:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:49.803 11:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:49.803 11:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.803 11:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.803 11:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.803 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.803 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:49.803 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.803 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.061 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.061 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.061 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.061 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.318 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.318 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.318 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.318 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.575 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.575 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.575 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.575 11:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.833 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.833 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:50.833 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.833 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.092 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.092 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:51.092 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.350 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:51.608 11:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:52.543 11:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:52.543 11:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:52.543 11:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.543 11:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.801 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.801 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.801 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.801 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.059 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.059 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.059 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.059 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.317 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- 
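
Every check_status round above expands into six port_status probes at multipath_status.sh@64: fetch the I/O-path list from bdevperf, filter by transport service ID with jq, and compare one attribute (current, connected, accessible) against the expected value. A reconstruction consistent with the traced expansions (the helper bodies are inferred from the trace, not copied from the script):

  port_status() {      # port_status <trsvcid> <attribute> <expected>
      local port=$1 attr=$2 expected=$3 actual
      actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]   # the traced [[ true == \t\r\u\e ]] comparisons
  }

  check_status() {     # args: 4420/4421 current, 4420/4421 connected, 4420/4421 accessible
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }
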
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.317 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.317 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.317 11:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.882 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.140 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.140 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:54.140 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.397 11:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:54.654 11:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.029 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.288 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.288 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.288 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.288 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.546 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.546 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.546 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.546 11:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.804 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.804 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.804 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.804 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.061 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.061 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:57.062 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.062 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.319 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.319 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:57.319 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:57.577 11:18:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:57.836 11:18:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:58.772 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:58.772 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:58.772 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.772 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:59.029 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.029 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:59.029 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.029 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.288 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.288 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.288 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.288 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.547 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.547 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.547 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.547 11:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.804 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.804 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.804 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.804 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:00.062 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:32:00.062 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:00.062 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.062 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.320 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.320 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:00.320 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:00.578 11:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:00.837 11:18:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:01.774 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:01.774 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:01.774 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.774 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:02.032 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.032 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:02.032 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.032 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.290 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.290 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.290 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.290 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.548 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.548 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
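
The set_ANA_state rounds walk the listener pair through the ANA state table, and the expectations follow the default single-active (active_passive) policy: exactly one usable path is "current" at a time, an optimized path is preferred over a non_optimized one, and an inaccessible path stays connected (its TCP connection is kept) but is neither accessible nor current. With both listeners inaccessible, as in the round above, no path is current and I/O simply queues until a path comes back. The helper itself, reconstructed from the @59/@60 traces (body inferred):

  set_ANA_state() {    # set_ANA_state <state for 4420> <state for 4421>
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
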
port_status 4421 connected true 00:32:02.548 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.548 11:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.807 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.807 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:02.807 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.807 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:03.065 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.065 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:03.065 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.065 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.323 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.323 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:03.323 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:03.582 11:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:03.841 11:18:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:04.778 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:04.778 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:04.778 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.778 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:05.036 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.036 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:05.036 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.036 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.295 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.295 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.295 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.295 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.553 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.553 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.553 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.553 11:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:05.811 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.811 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:05.811 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.811 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:06.070 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.070 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:06.070 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.070 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:06.329 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.329 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:06.587 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:06.587 11:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:06.846 11:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:07.105 11:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:08.042 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:08.042 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:08.042 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.042 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:08.300 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.300 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:08.300 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.300 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:08.559 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.559 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:08.559 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.559 11:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:08.818 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.818 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:08.818 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.818 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:09.077 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.077 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:09.077 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.077 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:09.334 11:18:23 
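
From multipath_status.sh@116 onward the test switches the bdev from the default single-active policy to active_active and repeats the state walk, and the expectations change accordingly: active_active spreads I/O across every path in the best available ANA state, so with both listeners optimized (or, later, both non_optimized) check_status now demands current=true on both ports at once, and only a worse-state path (non_optimized next to optimized, or inaccessible) drops out of the current set. The switch is a single RPC against bdevperf:

  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
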
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.334 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:09.334 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.334 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:09.591 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.591 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:09.591 11:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:09.848 11:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:10.107 11:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:11.039 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:11.039 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:11.039 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.039 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:11.298 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.298 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:11.298 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.298 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:11.556 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.556 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:11.556 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.556 11:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.837 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.837 11:18:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.837 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.837 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:12.095 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.095 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:12.095 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.095 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:12.353 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.353 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:12.353 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.353 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:12.611 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.611 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:12.611 11:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:12.869 11:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:13.128 11:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:14.066 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:14.066 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:14.066 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.066 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:14.325 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.325 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:14.325 11:18:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.325 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.582 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.582 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.582 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.582 11:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.840 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.840 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:14.840 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.840 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:15.098 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.098 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:15.098 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.098 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:15.356 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.356 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:15.356 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.356 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.614 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.614 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:15.614 11:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:15.871 11:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:16.145 11:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:17.157 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:17.157 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:17.157 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.157 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:17.416 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.416 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:17.416 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.416 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.673 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.673 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.673 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.673 11:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.930 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.930 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.930 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.930 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:18.187 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.187 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:18.188 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.188 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:18.445 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.445 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:18.445 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.445 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.702 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:18.702 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 373160 00:32:18.702 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 373160 ']' 00:32:18.702 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 373160 00:32:18.702 11:18:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:18.702 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.702 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373160 00:32:18.703 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:18.703 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:18.703 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373160' 00:32:18.703 killing process with pid 373160 00:32:18.703 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 373160 00:32:18.703 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 373160 00:32:18.963 Connection closed with partial response: 00:32:18.963 00:32:18.963 00:32:18.963 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 373160 00:32:18.963 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:18.963 [2024-07-11 11:17:58.997523] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:32:18.963 [2024-07-11 11:17:58.997623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373160 ] 00:32:18.963 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.963 [2024-07-11 11:17:59.060238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.963 [2024-07-11 11:17:59.147658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.963 Running I/O for 90 seconds... 
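
With all state combinations verified, the test kills bdevperf (@137); the perform_tests RPC connection drops mid-reply, hence "Connection closed with partial response", and the bdevperf log, try.txt, is dumped (@141). The NOTICE lines that follow are bdevperf's record of commands that completed with an ANA path error while the listener states were being flipped above. Reading one pair of lines, with field meanings per the NVMe specification:

  # WRITE sqid:1 cid:103 nsid:1 lba:58520 len:8 ...
  #   sqid/cid: submission queue and command ID; nsid/lba/len place the I/O (len in blocks)
  # ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
  #   03/02: status code type 0x3 (path related), status code 0x2 (ANA inaccessible)
  #   dnr:0: "do not retry" is clear, so the multipath layer may reissue the I/O on
  #          another path, which is why the verify workload keeps running through the flips
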
00:32:18.963 [2024-07-11 11:18:14.830293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:18.963 [2024-07-11 11:18:14.830688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.963 [2024-07-11 11:18:14.830705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.830728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.830774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.830807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.830826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.830864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.830881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.830934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.830960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.830983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.964 [2024-07-11 11:18:14.831095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.964 [2024-07-11 11:18:14.831508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.831969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.964 [2024-07-11 11:18:14.832255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58816 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.832975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.832992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.833016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.964 [2024-07-11 11:18:14.833036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:18.964 [2024-07-11 11:18:14.833077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:32:18.965 [2024-07-11 11:18:14.833569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.833962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.833979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.965 [2024-07-11 11:18:14.834667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.834966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.834984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.835011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.835029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.835057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.835075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.835117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:18.965 [2024-07-11 11:18:14.835135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.965 [2024-07-11 11:18:14.835178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.965 [2024-07-11 11:18:14.835205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.966 [2024-07-11 11:18:14.835831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.835966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.835993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:32:18.966 [2024-07-11 11:18:14.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:14.836927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.966 [2024-07-11 11:18:14.836944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:30.426599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.966 [2024-07-11 11:18:30.426663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:30.426733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.966 [2024-07-11 11:18:30.426760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:30.426803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.966 [2024-07-11 11:18:30.426908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:18.966 [2024-07-11 11:18:30.426937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.966 [2024-07-11 11:18:30.426968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.426993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.427655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:18.967 [2024-07-11 11:18:30.427692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.427975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.427991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.429476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.429529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.967 [2024-07-11 11:18:30.429575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.429963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.429986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.967 [2024-07-11 11:18:30.430019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:18.967 [2024-07-11 11:18:30.430043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.968 [2024-07-11 11:18:30.430242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.968 [2024-07-11 11:18:30.430416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.968 [2024-07-11 11:18:30.430432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
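The NOTICE pairs running through this stretch are SPDK's qpair layer printing each in-flight READ/WRITE and its completion as the active path reports ANA state "inaccessible" (status 03/02) while the test fails a path over; the I/O is expected to be retried on the surviving path rather than surfaced as an error. For triage, a rough tally of the affected commands can be pulled from the captured console output. A minimal sketch, assuming the log was saved to build.log (the filename is hypothetical):

    # count completions that landed on the inaccessible path
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' build.log
    # count distinct LBAs touched during the failover window
    grep -oE 'lba:[0-9]+' build.log | sort -u | wc -l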
00:32:18.968 [2024-07-11 11:18:30.430453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:18.968 [2024-07-11 11:18:30.430815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:18.968 [2024-07-11 11:18:30.430837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:18.968 Received shutdown signal, test time was about 32.625364 seconds
00:32:18.968
00:32:18.968 Latency(us)
00:32:18.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:18.968 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:18.968 Verification LBA range: start 0x0 length 0x4000
00:32:18.968 Nvme0n1 : 32.62 7944.24 31.03 0.00 0.00 16065.13 175.22 4026531.84
00:32:18.968 ===================================================================================================================
00:32:18.968 Total : 7944.24 31.03 0.00 0.00 16065.13 175.22 4026531.84
00:32:18.968 11:18:33
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:19.226 rmmod nvme_tcp 00:32:19.226 rmmod nvme_fabrics 00:32:19.226 rmmod nvme_keyring 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 372953 ']' 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 372953 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 372953 ']' 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 372953 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 372953 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 372953' 00:32:19.226 killing process with pid 372953 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 372953 00:32:19.226 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 372953 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.484 11:18:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.484 11:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.024 11:18:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:22.024 00:32:22.024 real 0m41.127s 00:32:22.024 user 2m2.422s 00:32:22.024 sys 0m11.289s 00:32:22.024 11:18:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.024 11:18:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:22.024 ************************************ 00:32:22.024 END TEST nvmf_host_multipath_status 00:32:22.024 ************************************ 00:32:22.024 11:18:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:22.024 11:18:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:22.024 11:18:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:22.024 11:18:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.024 11:18:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.024 ************************************ 00:32:22.024 START TEST nvmf_discovery_remove_ifc 00:32:22.024 ************************************ 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:22.024 * Looking for test storage... 
00:32:22.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.024 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:22.025 11:18:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.925 11:18:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.925 11:18:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.925 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.926 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.926 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:32:23.926 00:32:23.926 --- 10.0.0.2 ping statistics --- 00:32:23.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.926 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:23.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:23.926 00:32:23.926 --- 10.0.0.1 ping statistics --- 00:32:23.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.926 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=379933 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 379933 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 379933 ']' 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.926 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.926 [2024-07-11 11:18:38.205641] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:32:23.926 [2024-07-11 11:18:38.205727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.926 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.926 [2024-07-11 11:18:38.277089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.185 [2024-07-11 11:18:38.368790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.185 [2024-07-11 11:18:38.368846] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.185 [2024-07-11 11:18:38.368882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.185 [2024-07-11 11:18:38.368903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.185 [2024-07-11 11:18:38.368918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.185 [2024-07-11 11:18:38.368974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.185 [2024-07-11 11:18:38.520354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.185 [2024-07-11 11:18:38.528515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:24.185 null0 00:32:24.185 [2024-07-11 11:18:38.560488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=380065 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 380065 /tmp/host.sock 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 380065 ']' 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:24.185 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:24.185 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.443 [2024-07-11 11:18:38.623862] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:32:24.443 [2024-07-11 11:18:38.623943] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380065 ] 00:32:24.443 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.443 [2024-07-11 11:18:38.680826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.443 [2024-07-11 11:18:38.764898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.704 11:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.642 [2024-07-11 11:18:40.038963] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:25.642 [2024-07-11 11:18:40.039001] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:25.642 [2024-07-11 11:18:40.039051] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:25.900 [2024-07-11 11:18:40.125323] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:25.900 [2024-07-11 11:18:40.189712] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:25.900 [2024-07-11 11:18:40.189799] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:25.900 [2024-07-11 11:18:40.189836] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:25.900 [2024-07-11 11:18:40.189859] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:25.900 [2024-07-11 11:18:40.189890] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.900 [2024-07-11 11:18:40.196131] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdb7300 was disconnected and freed. delete nvme_qpair. 
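The get_bdev_list / wait_for_bdev traffic that follows is the harness polling the host app over /tmp/host.sock until the discovered bdev (nvme0n1) shows up, and later until it disappears again. A sketch of the two helpers as reconstructed from the xtrace above, not a verbatim copy of discovery_remove_ifc.sh (rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py):

    get_bdev_list() {
        # bdev names known to the host app, normalized to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected string
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }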
00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.900 11:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:27.275 11:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:28.210 11:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:29.148 11:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:30.081 11:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
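The loop above still reports nvme0n1 because the controller has not yet noticed the outage; the errno-110 burst that follows is the expected result of the fault injected at steps @75/@76 earlier, where the harness deleted the target address and downed the interface inside the cvl_0_0_ns_spdk namespace. Restated on its own (commands and names taken from the trace above):

    # pull the listener out from under the connected initiator
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down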
00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.454 11:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.454 [2024-07-11 11:18:45.631015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:31.454 [2024-07-11 11:18:45.631094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.454 [2024-07-11 11:18:45.631115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.454 [2024-07-11 11:18:45.631149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.454 [2024-07-11 11:18:45.631161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.454 [2024-07-11 11:18:45.631174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.454 [2024-07-11 11:18:45.631186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.454 [2024-07-11 11:18:45.631199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.454 [2024-07-11 11:18:45.631211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.454 [2024-07-11 11:18:45.631224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:31.454 [2024-07-11 11:18:45.631236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.454 [2024-07-11 11:18:45.631247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7db40 is same with the state(5) to be set 00:32:31.454 [2024-07-11 11:18:45.641050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7db40 (9): Bad file descriptor 00:32:31.454 [2024-07-11 11:18:45.651096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.387 [2024-07-11 11:18:46.688792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:32.387 [2024-07-11 
11:18:46.688855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7db40 with addr=10.0.0.2, port=4420 00:32:32.387 [2024-07-11 11:18:46.688878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7db40 is same with the state(5) to be set 00:32:32.387 [2024-07-11 11:18:46.688918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7db40 (9): Bad file descriptor 00:32:32.387 [2024-07-11 11:18:46.689339] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:32.387 [2024-07-11 11:18:46.689368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:32.387 [2024-07-11 11:18:46.689383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:32.387 [2024-07-11 11:18:46.689399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:32.387 [2024-07-11 11:18:46.689426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:32.387 [2024-07-11 11:18:46.689443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:32.387 11:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.321 [2024-07-11 11:18:47.691933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:33.321 [2024-07-11 11:18:47.691959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:33.321 [2024-07-11 11:18:47.691989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:33.321 [2024-07-11 11:18:47.692002] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:33.321 [2024-07-11 11:18:47.692022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
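The controller gives up this fast because bdev_nvme_start_discovery above was invoked with deliberately tight timeouts: --reconnect-delay-sec 1 retries once per second, --fast-io-fail-timeout-sec 1 fails queued I/O after one second without a connection, and --ctrlr-loss-timeout-sec 2 deletes the controller after two seconds of loss, which produces the "Resetting controller failed" and remove_discovery_entry messages in the lines that follow. A hedged sketch of the same knobs on a direct attach rather than a discovery service (address and subsystem NQN reused from this run):

    # illustration only, not part of the test flow
    rpc_cmd -s /tmp/host.sock bdev_nvme_attach_controller -b nvme1 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1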
00:32:33.321 [2024-07-11 11:18:47.692078] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:33.321 [2024-07-11 11:18:47.692128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.321 [2024-07-11 11:18:47.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.321 [2024-07-11 11:18:47.692181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.321 [2024-07-11 11:18:47.692194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.321 [2024-07-11 11:18:47.692213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.321 [2024-07-11 11:18:47.692226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.321 [2024-07-11 11:18:47.692239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.321 [2024-07-11 11:18:47.692250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.321 [2024-07-11 11:18:47.692264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:33.321 [2024-07-11 11:18:47.692276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:33.321 [2024-07-11 11:18:47.692288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
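For reference, SPDK prints NVMe completions as STATUS (sct/sc): the repeated ABORTED - SQ DELETION (00/08) entries above decode to Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion) — the expected completion for the in-flight ASYNC EVENT REQUEST and KEEP ALIVE admin commands once the qpair's submission queue is torn down during disconnect.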
00:32:33.321 [2024-07-11 11:18:47.692433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7cf80 (9): Bad file descriptor 00:32:33.321 [2024-07-11 11:18:47.693453] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:33.321 [2024-07-11 11:18:47.693474] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:33.321 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.580 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:33.581 11:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:34.520 11:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.459 [2024-07-11 11:18:49.743439] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:35.459 [2024-07-11 11:18:49.743479] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:35.459 [2024-07-11 11:18:49.743502] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.459 [2024-07-11 11:18:49.870928] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.459 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.719 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.719 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:35.719 11:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.719 [2024-07-11 11:18:50.095232] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:35.719 [2024-07-11 11:18:50.095300] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:35.719 [2024-07-11 11:18:50.095330] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:35.719 [2024-07-11 11:18:50.095353] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:35.719 [2024-07-11 11:18:50.095366] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.719 [2024-07-11 11:18:50.102609] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd95730 was disconnected and freed. delete nvme_qpair. 
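The sh@33/sh@34 trace shows the test polling once per second until the expected bdev name appears in the list. A plausible reconstruction of that wait loop, assuming it simply compares against get_bdev_list (the real helper may add a timeout or pattern-match the name, as the escaped \n\v\m\e\1\n\1 comparison suggests):

  wait_for_bdev() {
      # Spin until the host's bdev list equals the expected device name.
      local bdev=$1
      while [[ "$(get_bdev_list)" != "$bdev" ]]; do
          sleep 1
      done
  }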
00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 380065 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 380065 ']' 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 380065 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 380065 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 380065' 00:32:36.655 killing process with pid 380065 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 380065 00:32:36.655 11:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 380065 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:36.915 rmmod nvme_tcp 00:32:36.915 rmmod nvme_fabrics 00:32:36.915 rmmod nvme_keyring 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:36.915 
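The killprocess trace (autotest_common.sh@948-@972) spells its logic out step by step; reassembled, it is roughly the following. This is a sketch from the xtrace, not the verbatim function — the sudo branch is not exercised in this run, so its body is left empty here:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                             # @948: require a pid
      kill -0 "$pid" || return                              # @952: skip if already gone
      local process_name=
      if [ "$(uname)" = Linux ]; then                       # @953
          process_name=$(ps --no-headers -o comm= "$pid")   # @954
      fi
      if [ "$process_name" = sudo ]; then                   # @958: would need to target the child
          :                                                 # not taken here (reactor_0 / reactor_1)
      else
          echo "killing process with pid $pid"              # @966
          kill "$pid"                                       # @967
          wait "$pid" || true                               # @972: reap, ignore exit status
      fi
  }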
11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 379933 ']' 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 379933 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 379933 ']' 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 379933 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 379933 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 379933' 00:32:36.915 killing process with pid 379933 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 379933 00:32:36.915 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 379933 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.174 11:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.707 11:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:39.707 00:32:39.707 real 0m17.643s 00:32:39.707 user 0m25.509s 00:32:39.707 sys 0m3.064s 00:32:39.707 11:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.707 11:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:39.707 ************************************ 00:32:39.707 END TEST nvmf_discovery_remove_ifc 00:32:39.707 ************************************ 00:32:39.707 11:18:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:39.707 11:18:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:39.707 11:18:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:39.707 11:18:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.707 11:18:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.707 ************************************ 00:32:39.707 START TEST nvmf_identify_kernel_target 00:32:39.707 ************************************ 00:32:39.707 11:18:53 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:39.707 * Looking for test storage... 00:32:39.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:39.707 11:18:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:39.707 11:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:41.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:41.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:41.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:41.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:32:41.609 00:32:41.609 --- 10.0.0.2 ping statistics --- 00:32:41.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.609 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:32:41.609 00:32:41.609 --- 10.0.0.1 ping statistics --- 00:32:41.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.609 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:41.609 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:41.610 11:18:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:41.610 11:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:42.987 Waiting for block devices as requested 00:32:42.987 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:42.987 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:42.987 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:42.987 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:43.244 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:43.244 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:43.244 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:43.501 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:43.501 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:43.501 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:43.501 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:43.761 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:43.761 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:43.761 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:43.761 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:44.021 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:44.021 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:44.021 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:44.281 No valid GPT data, bailing 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:44.281 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:44.282 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:44.282 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:44.282 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:44.282 00:32:44.282 Discovery Log Number of Records 2, Generation counter 2 00:32:44.282 =====Discovery Log Entry 0====== 00:32:44.282 trtype: tcp 00:32:44.282 adrfam: ipv4 00:32:44.282 subtype: current discovery subsystem 00:32:44.282 treq: not specified, sq flow control disable supported 00:32:44.282 portid: 1 00:32:44.282 trsvcid: 4420 00:32:44.282 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:44.282 traddr: 10.0.0.1 00:32:44.282 eflags: none 00:32:44.282 sectype: none 00:32:44.282 =====Discovery Log Entry 1====== 00:32:44.282 trtype: tcp 00:32:44.282 adrfam: ipv4 00:32:44.282 subtype: nvme subsystem 00:32:44.282 treq: not specified, sq flow control disable supported 00:32:44.282 portid: 1 00:32:44.282 trsvcid: 4420 00:32:44.282 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:44.282 traddr: 10.0.0.1 00:32:44.282 eflags: none 00:32:44.282 sectype: none 00:32:44.282 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:44.282 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:44.282 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.282 ===================================================== 00:32:44.282 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:44.282 ===================================================== 00:32:44.282 Controller Capabilities/Features 00:32:44.282 ================================ 00:32:44.282 Vendor ID: 0000 00:32:44.282 Subsystem Vendor ID: 0000 00:32:44.282 Serial Number: 3b50d9b59f7f81812814 00:32:44.282 Model Number: Linux 00:32:44.282 Firmware Version: 6.7.0-68 00:32:44.282 Recommended Arb Burst: 0 00:32:44.282 IEEE OUI Identifier: 00 00 00 00:32:44.282 Multi-path I/O 00:32:44.282 May have multiple subsystem ports: No 00:32:44.282 May have multiple 
controllers: No 00:32:44.282 Associated with SR-IOV VF: No 00:32:44.282 Max Data Transfer Size: Unlimited 00:32:44.282 Max Number of Namespaces: 0 00:32:44.282 Max Number of I/O Queues: 1024 00:32:44.282 NVMe Specification Version (VS): 1.3 00:32:44.282 NVMe Specification Version (Identify): 1.3 00:32:44.282 Maximum Queue Entries: 1024 00:32:44.282 Contiguous Queues Required: No 00:32:44.282 Arbitration Mechanisms Supported 00:32:44.282 Weighted Round Robin: Not Supported 00:32:44.282 Vendor Specific: Not Supported 00:32:44.282 Reset Timeout: 7500 ms 00:32:44.282 Doorbell Stride: 4 bytes 00:32:44.282 NVM Subsystem Reset: Not Supported 00:32:44.282 Command Sets Supported 00:32:44.282 NVM Command Set: Supported 00:32:44.282 Boot Partition: Not Supported 00:32:44.282 Memory Page Size Minimum: 4096 bytes 00:32:44.282 Memory Page Size Maximum: 4096 bytes 00:32:44.282 Persistent Memory Region: Not Supported 00:32:44.282 Optional Asynchronous Events Supported 00:32:44.282 Namespace Attribute Notices: Not Supported 00:32:44.282 Firmware Activation Notices: Not Supported 00:32:44.282 ANA Change Notices: Not Supported 00:32:44.282 PLE Aggregate Log Change Notices: Not Supported 00:32:44.282 LBA Status Info Alert Notices: Not Supported 00:32:44.282 EGE Aggregate Log Change Notices: Not Supported 00:32:44.282 Normal NVM Subsystem Shutdown event: Not Supported 00:32:44.282 Zone Descriptor Change Notices: Not Supported 00:32:44.282 Discovery Log Change Notices: Supported 00:32:44.282 Controller Attributes 00:32:44.282 128-bit Host Identifier: Not Supported 00:32:44.282 Non-Operational Permissive Mode: Not Supported 00:32:44.282 NVM Sets: Not Supported 00:32:44.282 Read Recovery Levels: Not Supported 00:32:44.282 Endurance Groups: Not Supported 00:32:44.282 Predictable Latency Mode: Not Supported 00:32:44.282 Traffic Based Keep ALive: Not Supported 00:32:44.282 Namespace Granularity: Not Supported 00:32:44.282 SQ Associations: Not Supported 00:32:44.282 UUID List: Not Supported 00:32:44.282 Multi-Domain Subsystem: Not Supported 00:32:44.282 Fixed Capacity Management: Not Supported 00:32:44.282 Variable Capacity Management: Not Supported 00:32:44.282 Delete Endurance Group: Not Supported 00:32:44.282 Delete NVM Set: Not Supported 00:32:44.282 Extended LBA Formats Supported: Not Supported 00:32:44.282 Flexible Data Placement Supported: Not Supported 00:32:44.282 00:32:44.282 Controller Memory Buffer Support 00:32:44.282 ================================ 00:32:44.282 Supported: No 00:32:44.282 00:32:44.282 Persistent Memory Region Support 00:32:44.282 ================================ 00:32:44.282 Supported: No 00:32:44.282 00:32:44.282 Admin Command Set Attributes 00:32:44.282 ============================ 00:32:44.282 Security Send/Receive: Not Supported 00:32:44.282 Format NVM: Not Supported 00:32:44.282 Firmware Activate/Download: Not Supported 00:32:44.282 Namespace Management: Not Supported 00:32:44.282 Device Self-Test: Not Supported 00:32:44.282 Directives: Not Supported 00:32:44.282 NVMe-MI: Not Supported 00:32:44.282 Virtualization Management: Not Supported 00:32:44.282 Doorbell Buffer Config: Not Supported 00:32:44.282 Get LBA Status Capability: Not Supported 00:32:44.282 Command & Feature Lockdown Capability: Not Supported 00:32:44.282 Abort Command Limit: 1 00:32:44.282 Async Event Request Limit: 1 00:32:44.282 Number of Firmware Slots: N/A 00:32:44.282 Firmware Slot 1 Read-Only: N/A 00:32:44.282 Firmware Activation Without Reset: N/A 00:32:44.282 Multiple Update Detection Support: N/A 
00:32:44.282 Firmware Update Granularity: No Information Provided 00:32:44.282 Per-Namespace SMART Log: No 00:32:44.282 Asymmetric Namespace Access Log Page: Not Supported 00:32:44.282 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:44.282 Command Effects Log Page: Not Supported 00:32:44.282 Get Log Page Extended Data: Supported 00:32:44.282 Telemetry Log Pages: Not Supported 00:32:44.282 Persistent Event Log Pages: Not Supported 00:32:44.282 Supported Log Pages Log Page: May Support 00:32:44.282 Commands Supported & Effects Log Page: Not Supported 00:32:44.282 Feature Identifiers & Effects Log Page:May Support 00:32:44.282 NVMe-MI Commands & Effects Log Page: May Support 00:32:44.282 Data Area 4 for Telemetry Log: Not Supported 00:32:44.282 Error Log Page Entries Supported: 1 00:32:44.282 Keep Alive: Not Supported 00:32:44.282 00:32:44.282 NVM Command Set Attributes 00:32:44.282 ========================== 00:32:44.282 Submission Queue Entry Size 00:32:44.282 Max: 1 00:32:44.282 Min: 1 00:32:44.282 Completion Queue Entry Size 00:32:44.282 Max: 1 00:32:44.282 Min: 1 00:32:44.282 Number of Namespaces: 0 00:32:44.282 Compare Command: Not Supported 00:32:44.282 Write Uncorrectable Command: Not Supported 00:32:44.282 Dataset Management Command: Not Supported 00:32:44.282 Write Zeroes Command: Not Supported 00:32:44.282 Set Features Save Field: Not Supported 00:32:44.282 Reservations: Not Supported 00:32:44.282 Timestamp: Not Supported 00:32:44.282 Copy: Not Supported 00:32:44.282 Volatile Write Cache: Not Present 00:32:44.282 Atomic Write Unit (Normal): 1 00:32:44.282 Atomic Write Unit (PFail): 1 00:32:44.282 Atomic Compare & Write Unit: 1 00:32:44.282 Fused Compare & Write: Not Supported 00:32:44.282 Scatter-Gather List 00:32:44.282 SGL Command Set: Supported 00:32:44.282 SGL Keyed: Not Supported 00:32:44.282 SGL Bit Bucket Descriptor: Not Supported 00:32:44.282 SGL Metadata Pointer: Not Supported 00:32:44.282 Oversized SGL: Not Supported 00:32:44.282 SGL Metadata Address: Not Supported 00:32:44.282 SGL Offset: Supported 00:32:44.282 Transport SGL Data Block: Not Supported 00:32:44.282 Replay Protected Memory Block: Not Supported 00:32:44.282 00:32:44.282 Firmware Slot Information 00:32:44.282 ========================= 00:32:44.282 Active slot: 0 00:32:44.282 00:32:44.282 00:32:44.282 Error Log 00:32:44.282 ========= 00:32:44.282 00:32:44.282 Active Namespaces 00:32:44.282 ================= 00:32:44.282 Discovery Log Page 00:32:44.282 ================== 00:32:44.282 Generation Counter: 2 00:32:44.282 Number of Records: 2 00:32:44.282 Record Format: 0 00:32:44.282 00:32:44.282 Discovery Log Entry 0 00:32:44.282 ---------------------- 00:32:44.282 Transport Type: 3 (TCP) 00:32:44.282 Address Family: 1 (IPv4) 00:32:44.282 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:44.282 Entry Flags: 00:32:44.282 Duplicate Returned Information: 0 00:32:44.282 Explicit Persistent Connection Support for Discovery: 0 00:32:44.282 Transport Requirements: 00:32:44.282 Secure Channel: Not Specified 00:32:44.282 Port ID: 1 (0x0001) 00:32:44.282 Controller ID: 65535 (0xffff) 00:32:44.282 Admin Max SQ Size: 32 00:32:44.282 Transport Service Identifier: 4420 00:32:44.282 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:44.283 Transport Address: 10.0.0.1 00:32:44.283 Discovery Log Entry 1 00:32:44.283 ---------------------- 00:32:44.283 Transport Type: 3 (TCP) 00:32:44.283 Address Family: 1 (IPv4) 00:32:44.283 Subsystem Type: 2 (NVM Subsystem) 00:32:44.283 Entry Flags: 
00:32:44.283 Duplicate Returned Information: 0 00:32:44.283 Explicit Persistent Connection Support for Discovery: 0 00:32:44.283 Transport Requirements: 00:32:44.283 Secure Channel: Not Specified 00:32:44.283 Port ID: 1 (0x0001) 00:32:44.283 Controller ID: 65535 (0xffff) 00:32:44.283 Admin Max SQ Size: 32 00:32:44.283 Transport Service Identifier: 4420 00:32:44.283 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:44.283 Transport Address: 10.0.0.1 00:32:44.283 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:44.544 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.544 get_feature(0x01) failed 00:32:44.544 get_feature(0x02) failed 00:32:44.544 get_feature(0x04) failed 00:32:44.544 ===================================================== 00:32:44.544 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.544 ===================================================== 00:32:44.544 Controller Capabilities/Features 00:32:44.544 ================================ 00:32:44.544 Vendor ID: 0000 00:32:44.544 Subsystem Vendor ID: 0000 00:32:44.544 Serial Number: f9266e5be51627e33acf 00:32:44.544 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:44.544 Firmware Version: 6.7.0-68 00:32:44.544 Recommended Arb Burst: 6 00:32:44.544 IEEE OUI Identifier: 00 00 00 00:32:44.544 Multi-path I/O 00:32:44.544 May have multiple subsystem ports: Yes 00:32:44.544 May have multiple controllers: Yes 00:32:44.544 Associated with SR-IOV VF: No 00:32:44.544 Max Data Transfer Size: Unlimited 00:32:44.544 Max Number of Namespaces: 1024 00:32:44.544 Max Number of I/O Queues: 128 00:32:44.544 NVMe Specification Version (VS): 1.3 00:32:44.544 NVMe Specification Version (Identify): 1.3 00:32:44.544 Maximum Queue Entries: 1024 00:32:44.544 Contiguous Queues Required: No 00:32:44.544 Arbitration Mechanisms Supported 00:32:44.544 Weighted Round Robin: Not Supported 00:32:44.544 Vendor Specific: Not Supported 00:32:44.544 Reset Timeout: 7500 ms 00:32:44.544 Doorbell Stride: 4 bytes 00:32:44.544 NVM Subsystem Reset: Not Supported 00:32:44.544 Command Sets Supported 00:32:44.544 NVM Command Set: Supported 00:32:44.544 Boot Partition: Not Supported 00:32:44.544 Memory Page Size Minimum: 4096 bytes 00:32:44.544 Memory Page Size Maximum: 4096 bytes 00:32:44.544 Persistent Memory Region: Not Supported 00:32:44.544 Optional Asynchronous Events Supported 00:32:44.544 Namespace Attribute Notices: Supported 00:32:44.544 Firmware Activation Notices: Not Supported 00:32:44.544 ANA Change Notices: Supported 00:32:44.544 PLE Aggregate Log Change Notices: Not Supported 00:32:44.544 LBA Status Info Alert Notices: Not Supported 00:32:44.544 EGE Aggregate Log Change Notices: Not Supported 00:32:44.544 Normal NVM Subsystem Shutdown event: Not Supported 00:32:44.544 Zone Descriptor Change Notices: Not Supported 00:32:44.544 Discovery Log Change Notices: Not Supported 00:32:44.544 Controller Attributes 00:32:44.544 128-bit Host Identifier: Supported 00:32:44.544 Non-Operational Permissive Mode: Not Supported 00:32:44.544 NVM Sets: Not Supported 00:32:44.544 Read Recovery Levels: Not Supported 00:32:44.544 Endurance Groups: Not Supported 00:32:44.544 Predictable Latency Mode: Not Supported 00:32:44.544 Traffic Based Keep ALive: Supported 00:32:44.544 Namespace Granularity: Not Supported 
00:32:44.544 SQ Associations: Not Supported 00:32:44.544 UUID List: Not Supported 00:32:44.544 Multi-Domain Subsystem: Not Supported 00:32:44.544 Fixed Capacity Management: Not Supported 00:32:44.544 Variable Capacity Management: Not Supported 00:32:44.544 Delete Endurance Group: Not Supported 00:32:44.544 Delete NVM Set: Not Supported 00:32:44.544 Extended LBA Formats Supported: Not Supported 00:32:44.544 Flexible Data Placement Supported: Not Supported 00:32:44.544 00:32:44.544 Controller Memory Buffer Support 00:32:44.544 ================================ 00:32:44.544 Supported: No 00:32:44.544 00:32:44.544 Persistent Memory Region Support 00:32:44.544 ================================ 00:32:44.544 Supported: No 00:32:44.544 00:32:44.544 Admin Command Set Attributes 00:32:44.544 ============================ 00:32:44.544 Security Send/Receive: Not Supported 00:32:44.544 Format NVM: Not Supported 00:32:44.544 Firmware Activate/Download: Not Supported 00:32:44.544 Namespace Management: Not Supported 00:32:44.544 Device Self-Test: Not Supported 00:32:44.544 Directives: Not Supported 00:32:44.544 NVMe-MI: Not Supported 00:32:44.544 Virtualization Management: Not Supported 00:32:44.544 Doorbell Buffer Config: Not Supported 00:32:44.544 Get LBA Status Capability: Not Supported 00:32:44.544 Command & Feature Lockdown Capability: Not Supported 00:32:44.544 Abort Command Limit: 4 00:32:44.544 Async Event Request Limit: 4 00:32:44.544 Number of Firmware Slots: N/A 00:32:44.544 Firmware Slot 1 Read-Only: N/A 00:32:44.544 Firmware Activation Without Reset: N/A 00:32:44.544 Multiple Update Detection Support: N/A 00:32:44.544 Firmware Update Granularity: No Information Provided 00:32:44.544 Per-Namespace SMART Log: Yes 00:32:44.544 Asymmetric Namespace Access Log Page: Supported 00:32:44.544 ANA Transition Time : 10 sec 00:32:44.544 00:32:44.544 Asymmetric Namespace Access Capabilities 00:32:44.544 ANA Optimized State : Supported 00:32:44.544 ANA Non-Optimized State : Supported 00:32:44.544 ANA Inaccessible State : Supported 00:32:44.544 ANA Persistent Loss State : Supported 00:32:44.544 ANA Change State : Supported 00:32:44.544 ANAGRPID is not changed : No 00:32:44.544 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:44.544 00:32:44.544 ANA Group Identifier Maximum : 128 00:32:44.545 Number of ANA Group Identifiers : 128 00:32:44.545 Max Number of Allowed Namespaces : 1024 00:32:44.545 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:44.545 Command Effects Log Page: Supported 00:32:44.545 Get Log Page Extended Data: Supported 00:32:44.545 Telemetry Log Pages: Not Supported 00:32:44.545 Persistent Event Log Pages: Not Supported 00:32:44.545 Supported Log Pages Log Page: May Support 00:32:44.545 Commands Supported & Effects Log Page: Not Supported 00:32:44.545 Feature Identifiers & Effects Log Page:May Support 00:32:44.545 NVMe-MI Commands & Effects Log Page: May Support 00:32:44.545 Data Area 4 for Telemetry Log: Not Supported 00:32:44.545 Error Log Page Entries Supported: 128 00:32:44.545 Keep Alive: Supported 00:32:44.545 Keep Alive Granularity: 1000 ms 00:32:44.545 00:32:44.545 NVM Command Set Attributes 00:32:44.545 ========================== 00:32:44.545 Submission Queue Entry Size 00:32:44.545 Max: 64 00:32:44.545 Min: 64 00:32:44.545 Completion Queue Entry Size 00:32:44.545 Max: 16 00:32:44.545 Min: 16 00:32:44.545 Number of Namespaces: 1024 00:32:44.545 Compare Command: Not Supported 00:32:44.545 Write Uncorrectable Command: Not Supported 00:32:44.545 Dataset Management Command: Supported 
00:32:44.545 Write Zeroes Command: Supported 00:32:44.545 Set Features Save Field: Not Supported 00:32:44.545 Reservations: Not Supported 00:32:44.545 Timestamp: Not Supported 00:32:44.545 Copy: Not Supported 00:32:44.545 Volatile Write Cache: Present 00:32:44.545 Atomic Write Unit (Normal): 1 00:32:44.545 Atomic Write Unit (PFail): 1 00:32:44.545 Atomic Compare & Write Unit: 1 00:32:44.545 Fused Compare & Write: Not Supported 00:32:44.545 Scatter-Gather List 00:32:44.545 SGL Command Set: Supported 00:32:44.545 SGL Keyed: Not Supported 00:32:44.545 SGL Bit Bucket Descriptor: Not Supported 00:32:44.545 SGL Metadata Pointer: Not Supported 00:32:44.545 Oversized SGL: Not Supported 00:32:44.545 SGL Metadata Address: Not Supported 00:32:44.545 SGL Offset: Supported 00:32:44.545 Transport SGL Data Block: Not Supported 00:32:44.545 Replay Protected Memory Block: Not Supported 00:32:44.545 00:32:44.545 Firmware Slot Information 00:32:44.545 ========================= 00:32:44.545 Active slot: 0 00:32:44.545 00:32:44.545 Asymmetric Namespace Access 00:32:44.545 =========================== 00:32:44.545 Change Count : 0 00:32:44.545 Number of ANA Group Descriptors : 1 00:32:44.545 ANA Group Descriptor : 0 00:32:44.545 ANA Group ID : 1 00:32:44.545 Number of NSID Values : 1 00:32:44.545 Change Count : 0 00:32:44.545 ANA State : 1 00:32:44.545 Namespace Identifier : 1 00:32:44.545 00:32:44.545 Commands Supported and Effects 00:32:44.545 ============================== 00:32:44.545 Admin Commands 00:32:44.545 -------------- 00:32:44.545 Get Log Page (02h): Supported 00:32:44.545 Identify (06h): Supported 00:32:44.545 Abort (08h): Supported 00:32:44.545 Set Features (09h): Supported 00:32:44.545 Get Features (0Ah): Supported 00:32:44.545 Asynchronous Event Request (0Ch): Supported 00:32:44.545 Keep Alive (18h): Supported 00:32:44.545 I/O Commands 00:32:44.545 ------------ 00:32:44.545 Flush (00h): Supported 00:32:44.545 Write (01h): Supported LBA-Change 00:32:44.545 Read (02h): Supported 00:32:44.545 Write Zeroes (08h): Supported LBA-Change 00:32:44.545 Dataset Management (09h): Supported 00:32:44.545 00:32:44.545 Error Log 00:32:44.545 ========= 00:32:44.545 Entry: 0 00:32:44.545 Error Count: 0x3 00:32:44.545 Submission Queue Id: 0x0 00:32:44.545 Command Id: 0x5 00:32:44.545 Phase Bit: 0 00:32:44.545 Status Code: 0x2 00:32:44.545 Status Code Type: 0x0 00:32:44.545 Do Not Retry: 1 00:32:44.545 Error Location: 0x28 00:32:44.545 LBA: 0x0 00:32:44.545 Namespace: 0x0 00:32:44.545 Vendor Log Page: 0x0 00:32:44.545 ----------- 00:32:44.545 Entry: 1 00:32:44.545 Error Count: 0x2 00:32:44.545 Submission Queue Id: 0x0 00:32:44.545 Command Id: 0x5 00:32:44.545 Phase Bit: 0 00:32:44.545 Status Code: 0x2 00:32:44.545 Status Code Type: 0x0 00:32:44.545 Do Not Retry: 1 00:32:44.545 Error Location: 0x28 00:32:44.545 LBA: 0x0 00:32:44.545 Namespace: 0x0 00:32:44.545 Vendor Log Page: 0x0 00:32:44.545 ----------- 00:32:44.545 Entry: 2 00:32:44.545 Error Count: 0x1 00:32:44.545 Submission Queue Id: 0x0 00:32:44.545 Command Id: 0x4 00:32:44.545 Phase Bit: 0 00:32:44.545 Status Code: 0x2 00:32:44.545 Status Code Type: 0x0 00:32:44.545 Do Not Retry: 1 00:32:44.545 Error Location: 0x28 00:32:44.545 LBA: 0x0 00:32:44.545 Namespace: 0x0 00:32:44.545 Vendor Log Page: 0x0 00:32:44.545 00:32:44.545 Number of Queues 00:32:44.545 ================ 00:32:44.545 Number of I/O Submission Queues: 128 00:32:44.545 Number of I/O Completion Queues: 128 00:32:44.545 00:32:44.545 ZNS Specific Controller Data 00:32:44.545 
============================ 00:32:44.545 Zone Append Size Limit: 0 00:32:44.545 00:32:44.545 00:32:44.545 Active Namespaces 00:32:44.545 ================= 00:32:44.545 get_feature(0x05) failed 00:32:44.545 Namespace ID:1 00:32:44.545 Command Set Identifier: NVM (00h) 00:32:44.545 Deallocate: Supported 00:32:44.545 Deallocated/Unwritten Error: Not Supported 00:32:44.545 Deallocated Read Value: Unknown 00:32:44.545 Deallocate in Write Zeroes: Not Supported 00:32:44.545 Deallocated Guard Field: 0xFFFF 00:32:44.545 Flush: Supported 00:32:44.545 Reservation: Not Supported 00:32:44.545 Namespace Sharing Capabilities: Multiple Controllers 00:32:44.545 Size (in LBAs): 1953525168 (931GiB) 00:32:44.545 Capacity (in LBAs): 1953525168 (931GiB) 00:32:44.545 Utilization (in LBAs): 1953525168 (931GiB) 00:32:44.545 UUID: 3f40b377-1ae7-4b44-a8ba-1fd80a478684 00:32:44.545 Thin Provisioning: Not Supported 00:32:44.545 Per-NS Atomic Units: Yes 00:32:44.545 Atomic Boundary Size (Normal): 0 00:32:44.545 Atomic Boundary Size (PFail): 0 00:32:44.545 Atomic Boundary Offset: 0 00:32:44.545 NGUID/EUI64 Never Reused: No 00:32:44.545 ANA group ID: 1 00:32:44.545 Namespace Write Protected: No 00:32:44.545 Number of LBA Formats: 1 00:32:44.545 Current LBA Format: LBA Format #00 00:32:44.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:44.545 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.545 rmmod nvme_tcp 00:32:44.545 rmmod nvme_fabrics 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:44.545 11:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.454 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:46.712 
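[Editor's note] The nvmftestfini trace above unloads the initiator-side NVMe modules and removes the SPDK network namespace. A minimal stand-alone sketch of the same teardown, assuming the cvl_0_1 interface and cvl_0_0_ns_spdk namespace names used on this node, and assuming _remove_spdk_ns (whose body is not fully shown) boils down to deleting that namespace:

# Hedged recap of the nvmftestfini steps traced above.
set +e                                        # tolerate modules that are already gone
for i in {1..20}; do                          # the harness retries module removal
    modprobe -v -r nvme-tcp && break          # produces the "rmmod nvme_tcp" lines above
    sleep 1
done
modprobe -v -r nvme-fabrics                   # fabrics core, once nvme-tcp is out
set -e
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1                      # drop the initiator-side address

After this, clean_kernel_target (next in the trace) undoes the configfs target in reverse order of its creation: unlink the port from the subsystem, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.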
11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:46.712 11:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:47.646 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.646 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:47.906 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:48.846 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:48.846 00:32:48.846 real 0m9.578s 00:32:48.846 user 0m2.037s 00:32:48.846 sys 0m3.493s 00:32:48.846 11:19:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.846 11:19:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.846 ************************************ 00:32:48.846 END TEST nvmf_identify_kernel_target 00:32:48.846 ************************************ 00:32:48.846 11:19:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:48.846 11:19:03 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:48.846 11:19:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:48.846 11:19:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.846 11:19:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.846 ************************************ 00:32:48.846 START TEST nvmf_auth_host 00:32:48.846 ************************************ 00:32:48.846 11:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:48.846 * Looking for test storage... 00:32:49.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:49.105 11:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.005 
11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:51.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:51.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:51.005 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:51.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:51.005 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:51.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:32:51.263 00:32:51.263 --- 10.0.0.2 ping statistics --- 00:32:51.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.263 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:51.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:32:51.263 00:32:51.263 --- 10.0.0.1 ping statistics --- 00:32:51.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.263 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:51.263 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=387144 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 387144 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 387144 ']' 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
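[Editor's note] Everything nvmf_tcp_init just did amounts to moving one of the two detected ports (cvl_0_0) into a private network namespace, so a single host can act as both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, default namespace). Condensed from the xtrace above into one runnable sketch; the interface names are node-specific:

# Condensed from the nvmf_tcp_init xtrace above (interface names as detected on this node).
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port now lives inside the netns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator address, default netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

The two sub-millisecond pings recorded above confirm both directions are reachable before nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth invocation that yields nvmfpid=387144).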
00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.264 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bbe8b1476c455549e22c9fba997aefe4 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oAS 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bbe8b1476c455549e22c9fba997aefe4 0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bbe8b1476c455549e22c9fba997aefe4 0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bbe8b1476c455549e22c9fba997aefe4 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oAS 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oAS 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oAS 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:51.521 
11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8adfb3ff63292ff8efcc446223d512be1d68ff1a27e831a5dc16de4230bf0dd5 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Kgg 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8adfb3ff63292ff8efcc446223d512be1d68ff1a27e831a5dc16de4230bf0dd5 3 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8adfb3ff63292ff8efcc446223d512be1d68ff1a27e831a5dc16de4230bf0dd5 3 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8adfb3ff63292ff8efcc446223d512be1d68ff1a27e831a5dc16de4230bf0dd5 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Kgg 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Kgg 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Kgg 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d2f2aa048dc1ac2983390a9a9fbd0729ef40387f38461875 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aAV 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d2f2aa048dc1ac2983390a9a9fbd0729ef40387f38461875 0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d2f2aa048dc1ac2983390a9a9fbd0729ef40387f38461875 0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d2f2aa048dc1ac2983390a9a9fbd0729ef40387f38461875 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aAV 00:32:51.521 11:19:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aAV 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aAV 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bf4fcc07dc2e6047f6f6f4ab2b622a38c116b173df001784 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ncM 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bf4fcc07dc2e6047f6f6f4ab2b622a38c116b173df001784 2 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bf4fcc07dc2e6047f6f6f4ab2b622a38c116b173df001784 2 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bf4fcc07dc2e6047f6f6f4ab2b622a38c116b173df001784 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:51.521 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ncM 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ncM 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ncM 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38da72c84a3009938f6bfd5b559c131a 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gc7 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38da72c84a3009938f6bfd5b559c131a 1 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38da72c84a3009938f6bfd5b559c131a 1 
00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38da72c84a3009938f6bfd5b559c131a 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:51.779 11:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gc7 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gc7 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gc7 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d3c428f69af1135fa29ef4717231209 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T0F 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d3c428f69af1135fa29ef4717231209 1 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d3c428f69af1135fa29ef4717231209 1 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d3c428f69af1135fa29ef4717231209 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T0F 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T0F 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T0F 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=dc5c54ba605287072959ba0efd74433e65e61c8471c0ae5d 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5FH 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc5c54ba605287072959ba0efd74433e65e61c8471c0ae5d 2 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc5c54ba605287072959ba0efd74433e65e61c8471c0ae5d 2 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc5c54ba605287072959ba0efd74433e65e61c8471c0ae5d 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5FH 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5FH 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5FH 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:51.779 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=602bc938dfcaf3800d8bc2343ca93aeb 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ixs 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 602bc938dfcaf3800d8bc2343ca93aeb 0 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 602bc938dfcaf3800d8bc2343ca93aeb 0 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=602bc938dfcaf3800d8bc2343ca93aeb 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ixs 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ixs 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ixs 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e7a5cb9f0d7551e9c400887a08276685e37e0ec1ade8f4dd8da66a1717893f0 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.na3 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e7a5cb9f0d7551e9c400887a08276685e37e0ec1ade8f4dd8da66a1717893f0 3 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e7a5cb9f0d7551e9c400887a08276685e37e0ec1ade8f4dd8da66a1717893f0 3 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e7a5cb9f0d7551e9c400887a08276685e37e0ec1ade8f4dd8da66a1717893f0 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:51.780 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.na3 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.na3 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.na3 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 387144 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 387144 ']' 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
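[Editor's note] Each gen_dhchap_key call above draws len/2 random bytes as a hex string with xxd and hands it to an inline python snippet (elided in the xtrace as "python -") that wraps it in the NVMe DH-HMAC-CHAP secret representation: DHHC-1:<hash-id>:<base64(secret || CRC32(secret), CRC little-endian)>:, where the hash id is 0/1/2/3 for null/sha256/sha384/sha512. A hedged reconstruction of the helper, inferred from the traced inputs and from the DHHC-1:00:ZDJmMmFh...: strings that appear later in this run; the real format_dhchap_key may differ in detail:

# Hedged reconstruction of gen_dhchap_key/format_dhchap_key; the inline python
# is not shown in the trace, so its body here is inferred, not verbatim.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex chars of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                  # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little") # integrity tail, little-endian
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]),
      base64.b64encode(secret + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                         # key files must not be world-readable
    echo "$file"
}

# Mirrors the host/auth.sh usage above, e.g.:
#   keys[0]=$(gen_dhchap_key null 32)   ckeys[0]=$(gen_dhchap_key sha512 64)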
00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oAS 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.037 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Kgg ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kgg 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aAV 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ncM ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ncM 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gc7 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T0F ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T0F 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5FH 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ixs ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ixs 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.na3 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:32:52.296 11:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:53.232 Waiting for block devices as requested
00:32:53.232 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:32:53.492 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:53.492 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:53.751 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:53.751 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:53.751 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:53.751 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:53.751 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:54.011 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:54.011 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:54.011 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:54.270 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:54.270 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:54.270 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:54.270 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:54.530 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:54.530 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:32:54.788 11:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:32:55.047 No valid GPT data, bailing
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:32:55.047 
00:32:55.047 Discovery Log Number of Records 2, Generation counter 2
00:32:55.047 =====Discovery Log Entry 0======
00:32:55.047 trtype: tcp
00:32:55.047 adrfam: ipv4
00:32:55.047 subtype: current discovery subsystem
00:32:55.047 treq: not specified, sq flow control disable supported
00:32:55.047 portid: 1
00:32:55.047 trsvcid: 4420
00:32:55.047 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:55.047 traddr: 10.0.0.1
00:32:55.047 eflags: none
00:32:55.047 sectype: none
00:32:55.047 =====Discovery Log Entry 1======
00:32:55.047 trtype: tcp
00:32:55.047 adrfam: ipv4
00:32:55.047 subtype: nvme subsystem
00:32:55.047 treq: not specified, sq flow control disable supported
00:32:55.047 portid: 1
00:32:55.047 trsvcid: 4420
00:32:55.047 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:55.047 traddr: 10.0.0.1
00:32:55.047 eflags: none
00:32:55.047 sectype: none
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==:
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==:
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==:
00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==:
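
Spelled out as plain shell, the configure_kernel_target / nvmet_auth_init sequence traced above amounts to the configfs recipe below. xtrace does not log redirection targets, so the attribute names here are an assumption based on the standard Linux nvmet configfs layout; every path and value is taken from the trace:

  modprobe nvmet
  cd /sys/kernel/config/nvmet

  # subsystem + namespace backed by the /dev/nvme0n1 reclaimed by setup.sh reset
  mkdir subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model
  echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable

  # TCP port 4420 on 10.0.0.1, then publish the subsystem through it
  mkdir ports/1
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/

  # restrict access to the test host and program its DH-CHAP expectations
  mkdir hosts/nqn.2024-02.io.spdk:host0
  echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
        subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
  echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash
  echo ffdhe2048 > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
  echo 'DHHC-1:00:...' > hosts/nqn.2024-02.io.spdk:host0/dhchap_key       # key1 secret from the trace
  echo 'DHHC-1:02:...' > hosts/nqn.2024-02.io.spdk:host0/dhchap_ctrl_key  # ckey1 secret from the trace

The nvme discover call in the trace is then just a sanity check that the port answers and lists nqn.2024-02.io.spdk:cnode0 next to the discovery subsystem.
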
]] 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.047 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.048 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.307 nvme0n1 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.307 
11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.307 
11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.307 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.566 nvme0n1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.566 11:19:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.566 11:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.824 nvme0n1 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
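
Each connect_authenticate pass in the remainder of the log then drives the initiator side through four RPCs, visible as host/auth.sh@60-65 above. A condensed sketch, under the same rpc.py assumption as before (jq is used exactly as in the trace):

  # 1. allow only the digests/DH groups under test for DH-CHAP
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # 2. attach, authenticating with key1 (and bidirectionally with ckey1)
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. authentication succeeded iff the controller actually came up
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # 4. detach so the next digest/dhgroup/keyid combination starts clean
  rpc.py bdev_nvme_detach_controller nvme0
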
00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.824 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.825 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.082 nvme0n1 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:56.082 11:19:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.082 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.083 nvme0n1 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.083 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.341 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.342 nvme0n1 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.342 11:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.600 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:56.600 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:32:56.600 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:56.600 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.601 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.861 nvme0n1 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.861 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.862 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.122 nvme0n1 00:32:57.122 
11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:57.122 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.123 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.383 nvme0n1 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
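
For orientation, the repetitive records from here on are generated by the digest x dhgroup x keyid iteration whose loop headers appear at host/auth.sh@100-103 in the trace; reconstructed, it has this shape (nvmet_auth_set_key and connect_authenticate being the two steps sketched earlier):

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # reprogram the kernel target's expected secret, then re-authenticate
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
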
00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.383 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.643 nvme0n1 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.643 11:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.643 
11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.643 11:19:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.643 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 nvme0n1 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.902 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.470 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:32:58.470 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:32:58.470 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:32:58.470 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:58.470 11:19:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.471 11:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.729 nvme0n1 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.729 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.986 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.987 11:19:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.987 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.249 nvme0n1 00:32:59.249 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.250 11:19:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.250 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.510 nvme0n1 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
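The trace above repeats one fixed cycle for every (dhgroup, keyid) pair. As a reading aid, here is a minimal sketch of that per-iteration sequence, reconstructed from the RPC calls visible in the xtrace output; the keys/ckeys arrays and the rpc_cmd and nvmet_auth_set_key helpers are assumed to be defined by auth.sh and nvmf/common.sh, and this is a paraphrase of the loop, not the script's literal source:

    for dhgroup in "${dhgroups[@]}"; do                               # auth.sh@101
        for keyid in "${!keys[@]}"; do                                # auth.sh@102
            # Program the kernel nvmet target with the secret(s) expected from this host.
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"             # auth.sh@103
            # Restrict the SPDK initiator to exactly the digest/DH-group pair under test.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # Connect with DH-HMAC-CHAP; the ckey expansion adds a controller
            # (bidirectional) key only for key IDs that define one.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # The controller only appears if authentication succeeded; verify, then tear down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth.sh@64
            rpc_cmd bdev_nvme_detach_controller nvme0                 # auth.sh@65
        done
    done

The "nvme0n1" lines interleaved in the trace are the namespace surfacing after each successful attach; a failed handshake would instead leave bdev_nvme_get_controllers empty and fail the name comparison.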
00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.510 11:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 nvme0n1 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 11:19:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.769 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.027 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.287 nvme0n1 00:33:00.287 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.287 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.288 11:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:02.191 11:19:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.191 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.449 nvme0n1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.449 
11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.449 11:19:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.449 11:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 nvme0n1 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.015 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.016 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.016 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.016 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.584 nvme0n1 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.584 
11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.584 11:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.153 nvme0n1 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.153 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.722 nvme0n1 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.722 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.723 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.723 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.723 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.723 11:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.723 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.663 nvme0n1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.663 11:19:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.663 11:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.597 nvme0n1 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.597 11:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.533 nvme0n1 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.533 
11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.533 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
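[Editor's note] Each pass traced here pairs a target-side nvmet_auth_set_key with a host-side connect_authenticate. Stripped of the xtrace plumbing, the host half of the sha256/ffdhe8192/keyid=3 iteration above reduces to the two RPCs below; a minimal sketch, assuming rpc.py talks to the same SPDK target as the rpc_cmd wrapper in the trace, and that the DHHC-1 secrets shown above were registered under the names key3 and ckey3 earlier in the run (that setup is not part of this excerpt):

    # Host-side RPC pair behind "connect_authenticate sha256 ffdhe8192 3".
    # key3/ckey3 are assumed key names registered earlier in the script.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3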
00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.534 11:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.103 nvme0n1 00:33:08.103 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.103 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.104 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.104 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.104 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.364 
11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.364 11:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.309 nvme0n1 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.309 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.310 nvme0n1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
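[Editor's note] The get_main_ns_ip trace that repeats before every attach is a transport-to-environment-variable lookup followed by bash indirect expansion. A minimal, self-contained re-creation of the logic traced at nvmf/common.sh@741-755, with the NVMF_* variables stubbed to the values visible in the log:

    #!/usr/bin/env bash
    # Stub the environment the test exports; 10.0.0.1 matches the log above.
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2

    get_main_ns_ip() {
        local transport=$1 ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $transport ]] && return 1      # no transport configured
        ip=${ip_candidates[$transport]}      # variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1          # indirect expansion to the address
        echo "${!ip}"
    }

    get_main_ns_ip tcp   # prints 10.0.0.1, as echoed at nvmf/common.sh@755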
00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.310 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.571 nvme0n1 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:09.571 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.572 11:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.831 nvme0n1 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.831 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.090 nvme0n1 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.090 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.091 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.350 nvme0n1 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
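[Editor's note] Note the ckey=(${ckeys[keyid]:+...}) expansion in the trace: it builds an argument array that is non-empty only when a controller (bidirectional) key exists for that keyid, which is why keyid 4 above, whose ckey is empty, attaches with --dhchap-key alone. A short sketch of the idiom, using placeholder key material rather than the real secrets:

    # ${var:+...} emits the extra flags only when ckeys[keyid] is non-empty,
    # so entry 1 here gets no --dhchap-ctrlr-key, mirroring keyid 4 in the log.
    ckeys=("DHHC-1:03:placeholder=:" "")
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo rpc.py bdev_nvme_attach_controller -b nvme0 --dhchap-key "key${keyid}" "${ckey[@]}"
    done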
00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.350 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.610 nvme0n1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
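[Editor's note] After each authenticated attach, the script verifies that the controller actually came up and then detaches it before moving to the next digest/dhgroup/keyid combination; the bare nvme0n1 lines between iterations are the namespace bdev reported on a successful attach. A hedged sketch of that verify/teardown step, again assuming rpc.py is wired to the same target as rpc_cmd in the trace:

    # Verify the authenticated controller exists, then detach for the next combo.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1            # the trace's [[ nvme0 == \n\v\m\e\0 ]] check
    rpc.py bdev_nvme_detach_controller nvme0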
00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.610 11:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.869 nvme0n1 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.869 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.128 nvme0n1 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:11.128 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.129 nvme0n1 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.129 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.387 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 nvme0n1 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 11:19:25 
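[Note] The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) load each digest/DH-group/key combination into the target before the host dials in. A minimal sketch of what such a helper plausibly does, assuming the target side is the Linux kernel nvmet driver and that its configfs host entry exposes the usual dhchap attributes; the configfs path and the $key/$ckey variable names below are assumptions, not taken from the trace:

    # Sketch: write the digest, DH group and DHHC-1 secret(s) into the
    # nvmet host entry (assumed path; requires nvmet configfs mounted).
    hostnqn=nqn.2024-02.io.spdk:host0            # hostnqn used by the traced attach commands
    cfg=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed location
    echo 'hmac(sha384)' > "$cfg/dhchap_hash"     # mirrors the auth.sh@48 echo
    echo ffdhe3072 > "$cfg/dhchap_dhgroup"       # mirrors the auth.sh@49 echo
    echo "$key" > "$cfg/dhchap_key"              # mirrors the auth.sh@50 echo
    # auth.sh@51 only emits the controller key when one is defined for the keyid:
    [[ -n $ckey ]] && echo "$ckey" > "$cfg/dhchap_ctrl_key"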
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.388 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.648 11:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.909 nvme0n1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.909 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.169 nvme0n1 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.169 11:19:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.169 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.170 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 nvme0n1 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:12.429 11:19:26 
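[Note] The get_main_ns_ip fragments interleaved through the trace (nvmf/common.sh@741-755) pick the address the host connects to: NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP, which resolves to 10.0.0.1 in this run. A reconstruction of that helper from the traced lines; the indirect-expansion step between @748 and @750 is inferred, since xtrace does not show it verbatim:

    # Reconstructed from the nvmf/common.sh trace; names follow the log.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @747: give up if the transport is unset or has no candidate variable
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # @748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                             # inferred: indirect expansion to 10.0.0.1
        [[ -z $ip ]] && return 1              # @750
        echo "$ip"                            # @755
    }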
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.688 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.688 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.688 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.688 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.689 11:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 nvme0n1 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:12.949 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.210 nvme0n1 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.210 11:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.782 nvme0n1 00:33:13.782 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.782 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.782 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.782 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.782 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.783 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.352 nvme0n1 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.352 11:19:28 
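[Note] Worth calling out in the repeated host/auth.sh@58 line: ckey is an array whose single option pair exists only when the keyid has a controller (bidirectional) secret. In the key 4 rounds above, auth.sh@46 sets ckey= and @51 tests [[ -z '' ]], so the array expands to nothing and the attach is issued with --dhchap-key key4 alone, exactly as traced. The idiom in isolation, as illustrated below (key values abbreviated):

    # ${var:+word} yields word only when var is set and non-empty, so keys
    # without a controller secret contribute no extra arguments at all.
    ckeys=([2]="DHHC-1:01:NmQz..." [4]="")   # indexed array, values abbreviated
    for keyid in 2 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=2 -> 2 extra arg(s): --dhchap-ctrlr-key ckey2
    # keyid=4 -> 0 extra arg(s):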
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.352 11:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.921 nvme0n1 00:33:14.921 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.921 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.921 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.921 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.922 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.489 nvme0n1 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
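[Note] Each pass verifies the authenticated connect before moving on. The \n\v\m\e\0 escaping in the trace is only how bash xtrace prints the right-hand side of a [[ ... == ... ]] match (every character escaped so it compares literally instead of as a glob pattern). What the script runs amounts to:

    # Confirm the attach produced the expected controller, then tear it down
    # before the next digest/dhgroup/keyid combination.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]               # traced as: [[ nvme0 == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0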
00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.489 11:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.059 nvme0n1 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
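[Note] For reference, one complete host-side pass of the loop, reduced to the underlying RPCs. Assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that key0/ckey0 were registered in the keyring during test setup (both assumptions about plumbing; the commands themselves are verbatim from the trace), the sha384/ffdhe8192 round underway above could be reproduced by hand as:

    # One iteration: pin the host to a single digest/DH group, attach with
    # key0 plus its controller key, verify, and detach.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0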
00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.059 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.060 11:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.001 nvme0n1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.002 11:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.942 nvme0n1 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.942 11:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 nvme0n1 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.881 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.812 nvme0n1 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.812 11:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.812 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.813 11:19:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.813 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 nvme0n1 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.748 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 11:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 nvme0n1 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 11:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.749 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.039 nvme0n1 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.039 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.040 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.299 nvme0n1 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.299 11:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.299 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.300 11:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.300 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.559 nvme0n1 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.559 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.560 11:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.818 nvme0n1 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.818 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.076 nvme0n1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.076 
11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.076 11:19:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.076 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.336 nvme0n1 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
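All five secrets cycling through these runs use the NVMe in-band-authentication secret representation DHHC-1:<id>:<base64>:. The two-digit id encodes how the secret was transformed (00 = unhashed; 01, 02, 03 = SHA-256, SHA-384, SHA-512, giving 32-, 48-, and 64-byte secrets), and, as an assumption worth flagging, the base64 payload should be the secret followed by a 4-byte CRC-32 per the nvme-cli key format. A quick length check on keyid 0 from this log, using only coreutils:

    # keyid 0 as echoed at auth.sh@45: id 00, i.e. an untransformed secret
    key='DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I:'

    payload=${key#DHHC-1:??:}                 # strip the 'DHHC-1:<id>:' prefix
    payload=${payload%:}                      # and the trailing ':'
    echo -n "${payload}" | base64 -d | wc -c  # 36 bytes = 32-byte secret + CRC-32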
00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.336 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.597 nvme0n1 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.597 11:19:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
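[Annotation] Each nvmet_auth_set_key <digest> <dhgroup> <keyid> call above re-keys the target side before the next connect attempt. The four echoes at auth.sh@48-51 ('hmac(sha512)', the DH group name, the key, and, when non-empty, the ckey) line up with the four DH-HMAC-CHAP attributes a Linux nvmet soft target exposes per host. The sketch below assumes that is where the echoes are redirected; xtrace hides redirections, and the configfs paths plus the keys/ckeys arrays are inferred rather than confirmed from this trace.

# Hypothetical shape of nvmet_auth_set_key, assuming a kernel nvmet
# target configured through configfs. keys/ckeys are the test's
# pre-generated DHHC-1 secret arrays.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"      # trace: echo 'hmac(sha512)'
    echo "$dhgroup"      > "$host/dhchap_dhgroup"   # trace: echo ffdhe3072
    echo "$key"          > "$host/dhchap_key"       # trace: echo DHHC-1:...
    # controller (bidirectional) key only when one exists for this keyid,
    # matching the [[ -z ... ]] guard at auth.sh@51:
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
}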
00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.597 11:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 nvme0n1 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.858 
11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.858 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.858 nvme0n1 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.116 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.117 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.383 nvme0n1 00:33:23.383 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.384 11:19:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.384 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.656 nvme0n1 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.656 11:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
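[Annotation] The host side of each iteration is connect_authenticate <digest> <dhgroup> <keyid>, whose RPC sequence is replayed in full above: restrict the initiator to one digest/DH-group pair, attach with the matching keys, confirm the controller exists, detach. Condensed into a sketch; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and the hard-coded transport here stands in for a value the real script takes from the environment.

# Host-side authentication round-trip as seen in the trace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # optional bidirectional key, exactly the auth.sh@58 idiom: expands to
    # nothing when ckeys[keyid] is unset/empty (as for keyid 4):
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # the attached controller must be named nvme0; the \n\v\m\e\0 pattern in
    # the trace is only a character-escaped literal, i.e. an exact match:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}

If authentication fails at any digest/DH-group/key combination, the attach RPC errors out and the subsequent nvme0 name check never passes, which is what makes this loop an exhaustive negotiation test.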
00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:23.656 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.657 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.940 nvme0n1 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.940 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.215 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.490 nvme0n1 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.490 11:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.759 nvme0n1 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
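[Annotation] The DHHC-1:xx:...: strings cycling through every iteration are NVMe in-band authentication secrets in their standard textual representation: a DHHC-1 prefix, a two-digit hash indicator (00 for an untransformed secret, 01/02/03 for SHA-256/384/512), a base64 payload carrying the secret followed by a CRC-32 check value, and a trailing colon. A small parsing snippet under that reading; this is an illustrative helper, not part of the test suite.

# Split a DHHC-1 secret into its fields and report the decoded size
# (secret plus 4-byte CRC-32).
parse_dhchap_secret() {
    local prefix hash b64 _
    IFS=: read -r prefix hash b64 _ <<< "$1"
    [[ $prefix == "DHHC-1" ]] || return 1
    printf 'hash indicator: %s\n' "$hash"
    printf 'decoded length: %d bytes (secret + CRC-32)\n' \
        "$(printf '%s' "$b64" | base64 -d | wc -c)"
}
# e.g., with key0 from this trace:
# parse_dhchap_secret "DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I:"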
00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.759 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.329 nvme0n1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
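[Annotation] Zooming out, the auth.sh@101-104 markers give the shape of the sweep this whole stretch of the log replays: an outer loop over DH groups (ffdhe3072 earlier, ffdhe4096, then ffdhe6144 here, with ffdhe8192 at the end of this excerpt) and an inner loop over every key index, with the digest pinned to sha512 in this portion. A reconstructed skeleton; dhgroups, keys, and ckeys are the test's pre-generated arrays, and key index 4 simply has no ckey, which the :+ expansion shown in the earlier sketch absorbs.

# Skeleton of the sweep at auth.sh@101-104: for each DH group, re-key the
# target with key/ckey <keyid>, then run one attach/verify/detach cycle.
digest=sha512
for dhgroup in "${dhgroups[@]}"; do       # ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
    for keyid in "${!keys[@]}"; do        # 0 1 2 3 4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done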
00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.329 11:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.897 nvme0n1 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.897 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.464 nvme0n1 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.464 11:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.034 nvme0n1 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.034 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.604 nvme0n1 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.604 11:19:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmJlOGIxNDc2YzQ1NTU0OWUyMmM5ZmJhOTk3YWVmZTQI4G/I: 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: ]] 00:33:27.604 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGFkZmIzZmY2MzI5MmZmOGVmY2M0NDYyMjNkNTEyYmUxZDY4ZmYxYTI3ZTgzMWE1ZGMxNmRlNDIzMGJmMGRkNWt3XJY=: 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.605 11:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 nvme0n1 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.544 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.545 11:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.479 nvme0n1 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.479 11:19:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.479 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzhkYTcyYzg0YTMwMDk5MzhmNmJmZDViNTU5YzEzMWFONonV: 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: ]] 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmQzYzQyOGY2OWFmMTEzNWZhMjllZjQ3MTcyMzEyMDnYPunM: 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.480 11:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.414 nvme0n1 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM1YzU0YmE2MDUyODcwNzI5NTliYTBlZmQ3NDQzM2U2NWU2MWM4NDcxYzBhZTVkzy4SNg==: 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjAyYmM5MzhkZmNhZjM4MDBkOGJjMjM0M2NhOTNhZWKkyUt3: 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:30.414 11:19:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.414 11:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.352 nvme0n1 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmU3YTVjYjlmMGQ3NTUxZTljNDAwODg3YTA4Mjc2Njg1ZTM3ZTBlYzFhZGU4ZjRkZDhkYTY2YTE3MTc4OTNmMKi6ybU=: 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:31.352 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:31.353 11:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.920 nvme0n1 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.920 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDJmMmFhMDQ4ZGMxYWMyOTgzMzkwYTlhOWZiZDA3MjllZjQwMzg3ZjM4NDYxODc1KE5nNg==: 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmY0ZmNjMDdkYzJlNjA0N2Y2ZjZmNGFiMmI2MjJhMzhjMTE2YjE3M2RmMDAxNzg0yC7tAA==: 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.179 
11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.179 request: 00:33:32.179 { 00:33:32.179 "name": "nvme0", 00:33:32.179 "trtype": "tcp", 00:33:32.179 "traddr": "10.0.0.1", 00:33:32.179 "adrfam": "ipv4", 00:33:32.179 "trsvcid": "4420", 00:33:32.179 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:32.179 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:32.179 "prchk_reftag": false, 00:33:32.179 "prchk_guard": false, 00:33:32.179 "hdgst": false, 00:33:32.179 "ddgst": false, 00:33:32.179 "method": "bdev_nvme_attach_controller", 00:33:32.179 "req_id": 1 00:33:32.179 } 00:33:32.179 Got JSON-RPC error response 00:33:32.179 response: 00:33:32.179 { 00:33:32.179 "code": -5, 00:33:32.179 "message": "Input/output error" 00:33:32.179 } 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:32.179 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.180 request: 00:33:32.180 { 00:33:32.180 "name": "nvme0", 00:33:32.180 "trtype": "tcp", 00:33:32.180 "traddr": "10.0.0.1", 00:33:32.180 "adrfam": "ipv4", 00:33:32.180 "trsvcid": "4420", 00:33:32.180 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:32.180 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:32.180 "prchk_reftag": false, 00:33:32.180 "prchk_guard": false, 00:33:32.180 "hdgst": false, 00:33:32.180 "ddgst": false, 00:33:32.180 "dhchap_key": "key2", 00:33:32.180 "method": "bdev_nvme_attach_controller", 00:33:32.180 "req_id": 1 00:33:32.180 } 00:33:32.180 Got JSON-RPC error response 00:33:32.180 response: 00:33:32.180 { 00:33:32.180 "code": -5, 00:33:32.180 "message": "Input/output error" 00:33:32.180 } 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:32.180 11:19:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:32.180 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.461 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.462 request: 00:33:32.462 { 00:33:32.462 "name": "nvme0", 00:33:32.462 "trtype": "tcp", 00:33:32.462 "traddr": "10.0.0.1", 00:33:32.462 "adrfam": "ipv4", 
00:33:32.462 "trsvcid": "4420", 00:33:32.462 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:32.462 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:32.462 "prchk_reftag": false, 00:33:32.462 "prchk_guard": false, 00:33:32.462 "hdgst": false, 00:33:32.462 "ddgst": false, 00:33:32.462 "dhchap_key": "key1", 00:33:32.462 "dhchap_ctrlr_key": "ckey2", 00:33:32.462 "method": "bdev_nvme_attach_controller", 00:33:32.462 "req_id": 1 00:33:32.462 } 00:33:32.462 Got JSON-RPC error response 00:33:32.462 response: 00:33:32.462 { 00:33:32.462 "code": -5, 00:33:32.462 "message": "Input/output error" 00:33:32.462 } 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:32.462 rmmod nvme_tcp 00:33:32.462 rmmod nvme_fabrics 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 387144 ']' 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 387144 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 387144 ']' 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 387144 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 387144 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 387144' 00:33:32.462 killing process with pid 387144 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 387144 00:33:32.462 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 387144 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:32.721 11:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.625 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:34.625 11:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:34.625 11:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:34.625 11:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:34.625 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:34.626 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:34.883 11:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:36.259 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.259 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:36.259 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.259 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:37.193 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:37.194 11:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oAS /tmp/spdk.key-null.aAV /tmp/spdk.key-sha256.gc7 /tmp/spdk.key-sha384.5FH /tmp/spdk.key-sha512.na3 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:37.194 11:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:38.569 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:38.569 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:38.569 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:38.569 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:38.569 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:38.569 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:38.569 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:38.569 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:38.569 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:38.569 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:38.569 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:38.569 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:38.569 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:38.569 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:38.569 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:38.569 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:38.569 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:38.569 00:33:38.569 real 0m49.563s 00:33:38.570 user 0m46.535s 00:33:38.570 sys 0m5.782s 00:33:38.570 11:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:38.570 11:19:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.570 ************************************ 00:33:38.570 END TEST nvmf_auth_host 00:33:38.570 ************************************ 00:33:38.570 11:19:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:38.570 11:19:52 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:38.570 11:19:52 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:38.570 11:19:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:38.570 11:19:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.570 11:19:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.570 ************************************ 00:33:38.570 START TEST nvmf_digest 00:33:38.570 ************************************ 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:38.570 * Looking for test storage... 
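[annotation] The nvmf_digest suite that begins here exercises the NVMe/TCP header and data digests, which surface as the "hdgst"/"ddgst" booleans in the bdev_nvme_attach_controller request dumps above (both false throughout the auth tests). A hypothetical params object with both digests enabled — mirroring the shape of the logged requests, not any request actually issued here, and reusing this run's generated hostnqn — would look like:

{
  "name": "nvme0",
  "trtype": "tcp",
  "traddr": "10.0.0.1",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode1",
  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
  "hdgst": true,
  "ddgst": true,
  "method": "bdev_nvme_attach_controller"
}

When enabled, each TCP PDU header and data payload carries a CRC32C that the receiver verifies, trading extra CPU for end-to-end integrity checking on the wire.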
00:33:38.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:38.570 11:19:52 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:38.570 11:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:40.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:40.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:40.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:40.472 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:40.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.473 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.732 11:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:40.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:33:40.732 00:33:40.732 --- 10.0.0.2 ping statistics --- 00:33:40.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.732 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:40.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:33:40.732 00:33:40.732 --- 10.0.0.1 ping statistics --- 00:33:40.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.732 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.732 ************************************ 00:33:40.732 START TEST nvmf_digest_clean 00:33:40.732 ************************************ 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=396596 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 396596 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 396596 ']' 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.732 
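For reference, the nvmf_tcp_init sequence traced above boils down to the following namespace plumbing (a condensed sketch using the device and namespace names from this run, not the full common.sh logic):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns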
11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.732 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.732 [2024-07-11 11:19:55.108173] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:33:40.732 [2024-07-11 11:19:55.108249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.732 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.990 [2024-07-11 11:19:55.196080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.990 [2024-07-11 11:19:55.299805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.990 [2024-07-11 11:19:55.299877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.990 [2024-07-11 11:19:55.299904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.990 [2024-07-11 11:19:55.299928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.990 [2024-07-11 11:19:55.299957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:40.991 [2024-07-11 11:19:55.300001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.991 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.991 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:40.991 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.991 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:40.991 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.249 null0 00:33:41.249 [2024-07-11 11:19:55.535290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.249 [2024-07-11 11:19:55.559486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=396616 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 396616 /var/tmp/bperf.sock 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 396616 ']' 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.249 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.249 [2024-07-11 11:19:55.608065] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:33:41.249 [2024-07-11 11:19:55.608137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396616 ] 00:33:41.249 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.249 [2024-07-11 11:19:55.665641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.507 [2024-07-11 11:19:55.751759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.507 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:41.507 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:41.507 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:41.507 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:41.507 11:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:41.765 11:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.765 11:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.334 nvme0n1 00:33:42.334 11:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:42.334 11:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.334 Running I/O for 2 seconds... 
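Each run_bperf pass in this suite has the same shape as the trace above; condensed, with paths relative to the spdk checkout and flags exactly as issued for this first (randread, 4 KiB, qd 128) run:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &        # start paused, RPC on bperf.sock
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # --ddgst enables the TCP data digest
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests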
00:33:44.866 00:33:44.866 Latency(us) 00:33:44.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:44.866 nvme0n1 : 2.05 19530.50 76.29 0.00 0.00 6420.01 3373.89 44661.57 00:33:44.866 =================================================================================================================== 00:33:44.866 Total : 19530.50 76.29 0.00 0.00 6420.01 3373.89 44661.57 00:33:44.866 0 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:44.866 | select(.opcode=="crc32c") 00:33:44.866 | "\(.module_name) \(.executed)"' 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 396616 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 396616 ']' 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 396616 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396616 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396616' 00:33:44.866 killing process with pid 396616 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 396616 00:33:44.866 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.866 00:33:44.866 Latency(us) 00:33:44.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.866 =================================================================================================================== 00:33:44.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.866 11:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 396616 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:44.866 11:19:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=397028 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 397028 /var/tmp/bperf.sock 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 397028 ']' 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:44.866 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:44.866 [2024-07-11 11:19:59.227403] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:33:44.866 [2024-07-11 11:19:59.227505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397028 ] 00:33:44.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.866 Zero copy mechanism will not be used. 
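The pass/fail decision after each run comes from the accel layer's statistics, as seen at host/digest.sh@93-96 in the first run above: with scan_dsa=false the expected module is software, and the check is roughly

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            (( acc_executed > 0 )) && [[ $acc_module == software ]]; }   # crc32c ran, and in the expected module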
00:33:44.866 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.866 [2024-07-11 11:19:59.285225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.125 [2024-07-11 11:19:59.372357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.125 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:45.125 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:45.125 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:45.125 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:45.125 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:45.383 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.383 11:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.950 nvme0n1 00:33:45.950 11:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:45.950 11:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.950 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.950 Zero copy mechanism will not be used. 00:33:45.950 Running I/O for 2 seconds... 
00:33:48.483 00:33:48.483 Latency(us) 00:33:48.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.483 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:48.483 nvme0n1 : 2.00 5802.74 725.34 0.00 0.00 2753.14 746.38 4951.61 00:33:48.483 =================================================================================================================== 00:33:48.483 Total : 5802.74 725.34 0.00 0.00 2753.14 746.38 4951.61 00:33:48.483 0 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:48.483 | select(.opcode=="crc32c") 00:33:48.483 | "\(.module_name) \(.executed)"' 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:48.483 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 397028 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 397028 ']' 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 397028 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397028 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397028' 00:33:48.484 killing process with pid 397028 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 397028 00:33:48.484 Received shutdown signal, test time was about 2.000000 seconds 00:33:48.484 00:33:48.484 Latency(us) 00:33:48.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.484 =================================================================================================================== 00:33:48.484 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 397028 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:48.484 11:20:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=397494 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 397494 /var/tmp/bperf.sock 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 397494 ']' 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:48.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.484 11:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:48.484 [2024-07-11 11:20:02.842333] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:33:48.484 [2024-07-11 11:20:02.842431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397494 ] 00:33:48.484 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.484 [2024-07-11 11:20:02.900854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.743 [2024-07-11 11:20:02.985672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.743 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.743 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:48.743 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:48.743 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:48.743 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:49.001 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.001 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.566 nvme0n1 00:33:49.566 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:49.566 11:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.566 Running I/O for 2 seconds... 
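With the randwrite 4 KiB/qd 128 pass now underway, note that the clean-digest suite sweeps a fixed matrix; paraphrasing the four run_bperf calls at host/digest.sh@128-131 as a loop (a sketch, not the script's literal form):

    for spec in 'randread 4096 128' 'randread 131072 16' \
                'randwrite 4096 128' 'randwrite 131072 16'; do
        run_bperf $spec false          # rw, block size, queue depth; trailing false = scan_dsa off
    done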
00:33:52.102 00:33:52.102 Latency(us) 00:33:52.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:52.102 nvme0n1 : 2.01 19115.51 74.67 0.00 0.00 6680.13 2754.94 9272.13 00:33:52.102 =================================================================================================================== 00:33:52.102 Total : 19115.51 74.67 0.00 0.00 6680.13 2754.94 9272.13 00:33:52.102 0 00:33:52.102 11:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:52.102 11:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:52.102 11:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:52.102 11:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:52.102 11:20:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:52.102 | select(.opcode=="crc32c") 00:33:52.102 | "\(.module_name) \(.executed)"' 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 397494 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 397494 ']' 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 397494 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397494 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397494' 00:33:52.102 killing process with pid 397494 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 397494 00:33:52.102 Received shutdown signal, test time was about 2.000000 seconds 00:33:52.102 00:33:52.102 Latency(us) 00:33:52.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.102 =================================================================================================================== 00:33:52.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 397494 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:52.102 11:20:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=397957 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 397957 /var/tmp/bperf.sock 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 397957 ']' 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:52.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.102 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.359 [2024-07-11 11:20:06.541085] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:33:52.359 [2024-07-11 11:20:06.541178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397957 ] 00:33:52.359 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.359 Zero copy mechanism will not be used. 
00:33:52.359 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.359 [2024-07-11 11:20:06.598814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.359 [2024-07-11 11:20:06.683334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.359 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:52.359 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:52.359 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:52.359 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:52.359 11:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:52.927 11:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.927 11:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.185 nvme0n1 00:33:53.186 11:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:53.186 11:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:53.186 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.186 Zero copy mechanism will not be used. 00:33:53.186 Running I/O for 2 seconds... 
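Teardown of each bperf instance follows the killprocess pattern traced repeatedly above; a minimal sketch of its shape (the real autotest_common.sh helper has more branches, e.g. for sudo-wrapped processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                                   # already gone?
        if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then  # bperf shows up as reactor_1
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                                # reap; bperf prints shutdown stats on exit
    }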
00:33:55.719 00:33:55.719 Latency(us) 00:33:55.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.719 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:55.719 nvme0n1 : 2.00 5958.48 744.81 0.00 0.00 2677.77 1711.22 4514.70 00:33:55.719 =================================================================================================================== 00:33:55.719 Total : 5958.48 744.81 0.00 0.00 2677.77 1711.22 4514.70 00:33:55.719 0 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:55.719 | select(.opcode=="crc32c") 00:33:55.719 | "\(.module_name) \(.executed)"' 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 397957 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 397957 ']' 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 397957 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397957 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397957' 00:33:55.719 killing process with pid 397957 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 397957 00:33:55.719 Received shutdown signal, test time was about 2.000000 seconds 00:33:55.719 00:33:55.719 Latency(us) 00:33:55.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.719 =================================================================================================================== 00:33:55.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.719 11:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 397957 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 396596 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@948 -- # '[' -z 396596 ']' 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 396596 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396596 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396596' 00:33:55.719 killing process with pid 396596 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 396596 00:33:55.719 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 396596 00:33:55.978 00:33:55.978 real 0m15.286s 00:33:55.978 user 0m30.128s 00:33:55.978 sys 0m4.338s 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:55.978 ************************************ 00:33:55.978 END TEST nvmf_digest_clean 00:33:55.978 ************************************ 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:55.978 ************************************ 00:33:55.978 START TEST nvmf_digest_error 00:33:55.978 ************************************ 00:33:55.978 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=398392 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 398392 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 398392 ']' 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.979 11:20:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:55.979 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.237 [2024-07-11 11:20:10.441764] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:33:56.237 [2024-07-11 11:20:10.441851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.237 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.237 [2024-07-11 11:20:10.504308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.237 [2024-07-11 11:20:10.586875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.237 [2024-07-11 11:20:10.586928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.237 [2024-07-11 11:20:10.586951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.237 [2024-07-11 11:20:10.586962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.237 [2024-07-11 11:20:10.586971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
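The nvmf_digest_error variant starting here differs from the clean suite only in its accel setup: crc32c is routed through the error-injecting module and then corrupted on demand, as the rpc_cmd calls below show. Condensed (rpc_cmd talks to the target's /var/tmp/spdk.sock):

    scripts/rpc.py accel_assign_opc -o crc32c -m error                   # route crc32c through the error module
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # pass-through while wiring up bdevperf
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # then corrupt 256 digests mid-run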
00:33:56.237 [2024-07-11 11:20:10.586996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:56.237 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:56.237 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:56.237 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:56.237 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:56.237 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.496 [2024-07-11 11:20:10.671544] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.496 null0
00:33:56.496 [2024-07-11 11:20:10.779048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:56.496 [2024-07-11 11:20:10.803251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=398527
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 398527 /var/tmp/bperf.sock
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 398527 ']'
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:56.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:56.496 11:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.496 [2024-07-11 11:20:10.846514] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:33:56.496 [2024-07-11 11:20:10.846589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398527 ]
00:33:56.496 EAL: No free 2048 kB hugepages reported on node 1
00:33:56.496 [2024-07-11 11:20:10.903501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:56.754 [2024-07-11 11:20:10.989128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:56.754 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:56.754 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:56.754 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:56.754 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:57.013 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:57.582 nvme0n1
00:33:57.582 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:57.583 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:57.583 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.583 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:57.583 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:57.583 11:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.583 Running I/O for 2 seconds... 00:33:57.583 [2024-07-11 11:20:11.949907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.583 [2024-07-11 11:20:11.949954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.583 [2024-07-11 11:20:11.949973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.583 [2024-07-11 11:20:11.962560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.583 [2024-07-11 11:20:11.962592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.583 [2024-07-11 11:20:11.962608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.583 [2024-07-11 11:20:11.974899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.583 [2024-07-11 11:20:11.974930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.583 [2024-07-11 11:20:11.974948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.583 [2024-07-11 11:20:11.989048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.583 [2024-07-11 11:20:11.989092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.583 [2024-07-11 11:20:11.989108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.583 [2024-07-11 11:20:12.001971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.583 [2024-07-11 11:20:12.002003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.583 [2024-07-11 11:20:12.002020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.014125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.014173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.014190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.026663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.026691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11632 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.026706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.041028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.041074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.041090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.051555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.051583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.051598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.066192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.066224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.066241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.077472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.077518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.091921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.091966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.091983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.106071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.106118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.106134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.117224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.117254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:19930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.117289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.129045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.129079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.141849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.141877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.141898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.156361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.156389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.841 [2024-07-11 11:20:12.156405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.841 [2024-07-11 11:20:12.167352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.841 [2024-07-11 11:20:12.167379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.167398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.180585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.180615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.180634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.193574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.193602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.193622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.206832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.206863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.206879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.217521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.217552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.217572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.230384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.230419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.230437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.244196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.244223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.244240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.842 [2024-07-11 11:20:12.257568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:57.842 [2024-07-11 11:20:12.257595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.842 [2024-07-11 11:20:12.257613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.269623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.269651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.269671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.281613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.281640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.281662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.295151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 
[2024-07-11 11:20:12.295194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.295210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.307706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.307736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.307762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.320343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.320372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.320390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.332926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.332954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.332972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.345157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.345187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.345218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.357924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.357968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.357985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.370410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.370440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.370458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.384122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.384151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.384183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.396655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.396685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.396702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.408449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.408477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.408495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.423986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.424016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.424035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.438180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.438210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.438233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.449748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.449790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.449808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.461696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.461740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.461763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.473640] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.473671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.473688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.488024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.488069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.488087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.500599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.500629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.500646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.100 [2024-07-11 11:20:12.512159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.100 [2024-07-11 11:20:12.512186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.100 [2024-07-11 11:20:12.512205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.525154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.525184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.525208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.539205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.539234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.539255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.552003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.552069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
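
Every failure in this stream is the same injected condition and prints as a three-line group: nvme_tcp.c flags the bad data digest on the qpair, nvme_qpair.c prints the READ that carried it, and the completion is returned as COMMAND TRANSIENT TRANSPORT ERROR (00/22). Condensed from the xtrace earlier in this section, the commands that set this up are, in order (in this harness rpc_cmd talks to the target on the default /var/tmp/spdk.sock, bperf_rpc to bdevperf on /var/tmp/bperf.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side: route crc32c through the error-injection accel module.
"$RPC" accel_assign_opc -o crc32c -m error
# bdevperf side: keep NVMe error stats and retry failed I/O indefinitely.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Injection stays disabled while the controller connects with data digest on,
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# then crc32c corruption is armed and the 2-second randread workload started.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
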
00:33:58.359 [2024-07-11 11:20:12.563224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.563251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.563269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.577814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.577861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.577878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.592738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.592789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.592806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.608276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.608303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.608321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.619382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.619409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.619428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.632178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.632208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.632225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.645523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.645552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.645573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.660147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.660174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.660193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.673849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.673880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.673909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.686187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.686216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.686234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.696992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.697021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.697041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.709539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.709567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.709586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.723053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.723098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.723115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.735919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.735950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.735968] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.748424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.748453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.748470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.760675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.760704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.760724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.359 [2024-07-11 11:20:12.773080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.359 [2024-07-11 11:20:12.773110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.359 [2024-07-11 11:20:12.773143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.618 [2024-07-11 11:20:12.785082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.618 [2024-07-11 11:20:12.785114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.785133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.797607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.797635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.797653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.810522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.810551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.810571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.822487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.822534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.836618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.836645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.836664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.848408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.848436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.848453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.860378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.860407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.860427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.873033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.873075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.873090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.887060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.887102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.887119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.898923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.898950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.898970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.909978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.910006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.619 [2024-07-11 11:20:12.910037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.924718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.924746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.924792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.939254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.939282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.939312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.950498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.950527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.950544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.966318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.966346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.966371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.979296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.979326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.979353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:12.991898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:12.991929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:12.991945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:13.004062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:13.004101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18382 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:13.004143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:13.017263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:13.017292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:13.017331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:13.029709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:13.029768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:13.029788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.619 [2024-07-11 11:20:13.041906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.619 [2024-07-11 11:20:13.041939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.619 [2024-07-11 11:20:13.041957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.054272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.054301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.054324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.067931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.067961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.067986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.082316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.082346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.082368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.092439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.092467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.092486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.106016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.106067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.106083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.120354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.120401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.120421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.132915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.132943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.132959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.145384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.145411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.145430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.157567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.157597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.157616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.170399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.170429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.879 [2024-07-11 11:20:13.170446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.879 [2024-07-11 11:20:13.182697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0) 00:33:58.879 [2024-07-11 11:20:13.182742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.879 [2024-07-11 11:20:13.182763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... several dozen further repetitions elided: each is an nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* "data digest error on tqpair=(0xaa39d0)" followed by a READ command print (sqid:1, varying cid/lba, len:1) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 11:20:13.195042 through 11:20:13.921182 ...]
00:33:59.659 [2024-07-11 11:20:13.934302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa39d0)
00:33:59.659 [2024-07-11 11:20:13.934333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.659 [2024-07-11 11:20:13.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.659
00:33:59.659                                                                 Latency(us)
00:33:59.659 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:59.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:59.659 	 nvme0n1                  :       2.00   19907.18      77.76       0.00       0.00    6420.99    3519.53   18155.90
00:33:59.659 ===================================================================================================================
00:33:59.659 Total                       :            19907.18      77.76       0.00       0.00    6420.99    3519.53   18155.90
00:33:59.659 0
00:33:59.659 11:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:59.659 11:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:59.659 11:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:59.659 | .driver_specific
00:33:59.659 | .nvme_error
00:33:59.659 | .status_code
00:33:59.659 | .command_transient_transport_error'
00:33:59.659 11:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
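The get_transient_errcount step traced above reduces to a single bdev_get_iostat RPC filtered through jq. A minimal standalone sketch of the same readout, assuming the rpc.py location and /var/tmp/bperf.sock socket shown in the trace:

    #!/usr/bin/env bash
    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for a bdev.
    # Only meaningful if bdevperf was configured with bdev_nvme_set_options --nvme-error-stat.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # The test then asserts the count is non-zero (here it read back 156):
    (( $(get_transient_errcount nvme0n1) > 0 ))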
11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 398527
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 398527 ']'
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 398527
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 398527
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 398527'
killing process with pid 398527
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 398527
00:33:59.917 Received shutdown signal, test time was about 2.000000 seconds
00:33:59.917
00:33:59.917                                                                 Latency(us)
00:33:59.917 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:59.917 ===================================================================================================================
00:33:59.917 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:33:59.917 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 398527
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=398944
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 398944 /var/tmp/bperf.sock
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 398944 ']'
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
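The launch traced above starts bdevperf idle (-z, wait for RPC before running I/O) on core mask 0x2 with a dedicated RPC socket (-r), then polls until that socket answers. A hedged sketch of the same launch-and-wait pattern; the loop is a simplified stand-in for autotest_common.sh's waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe:

    #!/usr/bin/env bash
    # Start bdevperf with a queued randread job (128 KiB I/O, qd 16, 2 s) and
    # block until its out-of-band RPC socket accepts connections.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    max_retries=100
    until "$rpc" -s "$sock" rpc_get_methods &> /dev/null; do
        (( max_retries-- > 0 )) || exit 1   # give up after 100 attempts
        kill -0 "$bperfpid" || exit 1       # bail out if bdevperf died early
        sleep 0.1
    done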
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:00.174 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:00.174 [2024-07-11 11:20:14.487070] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:34:00.175 [2024-07-11 11:20:14.487177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398944 ]
00:34:00.175 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:00.175 Zero copy mechanism will not be used.
00:34:00.175 EAL: No free 2048 kB hugepages reported on node 1
00:34:00.175 [2024-07-11 11:20:14.546165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:00.432 [2024-07-11 11:20:14.632564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:00.432 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:00.432 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:34:00.432 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:00.432 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:00.690 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:00.690 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:00.690 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:00.690 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:00.691 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:00.691 11:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:01.262 nvme0n1
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:01.262 11:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:01.262 I/O size of 131072 is greater than zero copy threshold (65536).
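Strung together, the RPCs traced above are the whole error-injection setup for this subtest: enable per-controller NVMe error-status counters with unlimited retries on the bdevperf side, attach the target over TCP with data digest enabled, then inject crc32c corruption on an interval of 32 operations before kicking off I/O. A condensed sketch of that sequence, assuming (as the rpc_cmd/bperf_rpc split in the trace suggests) that the plain calls go to the nvmf target's default RPC socket while the -s calls target bdevperf's /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Initiator side: count error status codes per controller and retry forever,
    # so digest failures surface as transient errors instead of failed I/O.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c corruption is active while the controller attaches.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach the NVMe-oF/TCP controller with TCP data digest (--ddgst) enabled.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd crc32c result (assumed here to hit the side computing
    # the data digests), so READ completions fail digest verification.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the queued bdevperf jobs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests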
00:34:01.262 Zero copy mechanism will not be used.
00:34:01.262 Running I/O for 2 seconds...
00:34:01.262 [2024-07-11 11:20:15.567800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:01.262 [2024-07-11 11:20:15.567865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.262 [2024-07-11 11:20:15.567896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several dozen further repetitions elided: the injected crc32c corruption produces a data digest error on tqpair=(0x1c4e3d0) every few milliseconds, each followed by a READ command print (sqid:1, varying cid/lba, len:32) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 11:20:15.572713 through 11:20:15.890499 ...]
00:34:01.585 [2024-07-11 11:20:15.895960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:01.585 [2024-07-11 11:20:15.895993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.585 [2024-07-11 11:20:15.896010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.901573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.901603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.901635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.906622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.906653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.906670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.911735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.911776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.911795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.917181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.917219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.917252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.921995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.922043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.926693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.926723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.926740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.931201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.931232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.931248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.936673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.936704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.936721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.941972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.942003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.942020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.946733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.946786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.946806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.951424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.585 [2024-07-11 11:20:15.951470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.585 [2024-07-11 11:20:15.956307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.585 [2024-07-11 11:20:15.956354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.956371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.961167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.961217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.961234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.966887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.966921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.966938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.973778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.973810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.973827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.979256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.979289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.979322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.984521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.984552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.984569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.990027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.990060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.990078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:15.994990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:15.995021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:15.995053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.586 [2024-07-11 11:20:16.000609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.586 [2024-07-11 11:20:16.000654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.586 [2024-07-11 11:20:16.000670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.005510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.005543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:01.861 [2024-07-11 11:20:16.005568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.010167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.010198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.010215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.015019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.015051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.015068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.019980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.020011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.020028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.025795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.025826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.025843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.033339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.033385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.033402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.039551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.039584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.039602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.045284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.045316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.045334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.050540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.050571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.050606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.055215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.055254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.055272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.060369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.060401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.060418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.065574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.065606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.065625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.068957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.068988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.069005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.072760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.072796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.072820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.077133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.861 [2024-07-11 11:20:16.077176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.861 [2024-07-11 11:20:16.077192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.861 [2024-07-11 11:20:16.081775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.081806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.081823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.086161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.086191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.086222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.091438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.091469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.091487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.096865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.096896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.096913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.101479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.101510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.101527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.106199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.106229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.106260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.110886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.110917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.110934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.115411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.115456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.115472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.120169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.120215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.120232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.124828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.124859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.124877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.129664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.129694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.129712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.134360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.134392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.134418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.139000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.139031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.139048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.143581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.143625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.143642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.148118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.148178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.152670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.152715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.152732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.157167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.157196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.157212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.161984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.162014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.162030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.166669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.166699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.166715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.172281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.172313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.172329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.177691] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.177729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.177747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.184352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.184384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.184401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.191583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.191615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.191632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.197266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.197296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.197313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.203051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.203097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.203114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.207873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.207905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.207937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.212433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.212476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.212493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:01.862 [2024-07-11 11:20:16.217577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.217607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.217639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.222564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.862 [2024-07-11 11:20:16.222597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.862 [2024-07-11 11:20:16.222621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.862 [2024-07-11 11:20:16.228526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.228558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.228575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.234316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.234347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.234364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.239897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.239929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.239947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.244782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.244812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.244830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.249731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.249770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.249790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.255043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.255076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.255094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.260704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.260736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.260761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.267186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.267218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.267235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.272087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.272127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.272145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.276706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.863 [2024-07-11 11:20:16.281479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:01.863 [2024-07-11 11:20:16.281509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.863 [2024-07-11 11:20:16.281526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.139 [2024-07-11 11:20:16.286074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.139 [2024-07-11 11:20:16.286105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.139 [2024-07-11 11:20:16.286122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.139 [2024-07-11 11:20:16.290806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.139 [2024-07-11 11:20:16.290837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.139 [2024-07-11 11:20:16.290854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.139 [2024-07-11 11:20:16.295378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.139 [2024-07-11 11:20:16.295410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.139 [2024-07-11 11:20:16.295427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.299987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.300017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.300034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.304704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.304760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.309324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.309357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.309374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.314493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.314526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.314543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.319168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.319200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 
[2024-07-11 11:20:16.319217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.323696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.323729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.323761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.328199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.328231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.328248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.332796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.332828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.332845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.337323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.337356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.337373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.341909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.341941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.341957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.346392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.346423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.346440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.350953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.350983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.351008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.355478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.355509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.355526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.359982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.360014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.360031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.364530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.364560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.364577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.369025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.369055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.369072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.373462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.373494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.373510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.378012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.378044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.378061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.382788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.382819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.382836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.387311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.387342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.387364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.393124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.393164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.393182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.396737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.396777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.396795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.401925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.401958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.401976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.408545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.408590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.408607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.414127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.414159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.140 [2024-07-11 11:20:16.414176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.140 [2024-07-11 11:20:16.420469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.140 [2024-07-11 11:20:16.420502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.140 [2024-07-11 11:20:16.420538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.140 [2024-07-11 11:20:16.426712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.140 [2024-07-11 11:20:16.426767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.140 [2024-07-11 11:20:16.426786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.140 [2024-07-11 11:20:16.432827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.140 [2024-07-11 11:20:16.432860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.140 [2024-07-11 11:20:16.432878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.140 [2024-07-11 11:20:16.438178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.140 [2024-07-11 11:20:16.438210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.438228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.443626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.443658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.443688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.450245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.450277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.450293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.456632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.456665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.456682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.462034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.462071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.462088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.466946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.466978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.466995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.472534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.472581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.472598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.479193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.479227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.479245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.486704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.486737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.486764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.492875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.492909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.492933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.497033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.497064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.497100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.504019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.504065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.504081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.511172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.511219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.511236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.517358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.517389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.517405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.523248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.523281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.523298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.528448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.528481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.528499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.533267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.533299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.533316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.538004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.538035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.538052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.543074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.543124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.548288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.548320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.548353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.553060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.553092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.553123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.141 [2024-07-11 11:20:16.557909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.141 [2024-07-11 11:20:16.557940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.141 [2024-07-11 11:20:16.557957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.562739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.562779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.562797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.567473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.567504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.567521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.572158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.572222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.576797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.576829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.576848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.581296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.581327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.581368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.586507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.586538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.586555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.593147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.593179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.593212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.600960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.600994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.601012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.608357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.608390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.608408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.616331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.616379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.623671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.623704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.623736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.630890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.630924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.630942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.637700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.637733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.637783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.645352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.645393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.645412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.651232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.651263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.651279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.655856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.655888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.655905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.660424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.660455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.660472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.664903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.664934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.664951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.669320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.669350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.669367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.673958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.673990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.674007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.679175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.679207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.679224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.683944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.683975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.683993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.688659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.688691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.688707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.693144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.693175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.693191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.697612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.697644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.697660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.702061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.702092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.702109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.706519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.706549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.706566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.416 [2024-07-11 11:20:16.710995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.416 [2024-07-11 11:20:16.711025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.416 [2024-07-11 11:20:16.711041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.715505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.715536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.715553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.720094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.720142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.724811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.724841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.724866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.729599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.729630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.729646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.734560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.734591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.734607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.739284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.739314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.739331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.743851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.743886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.743903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.749222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.749253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.749285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.756098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.756130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.756147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.763125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.763158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.763175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.769522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.769555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.769572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.776185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.776226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.776245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.782123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.782154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.782171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.788258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.788291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.788308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.793472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.793504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.793521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.798083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.798114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.798132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.803614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.803648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.803666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.809237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.809269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.809302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.814134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.814165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.814182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.818574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.818606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.818624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.822114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.822163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.822181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.828225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.828258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.828277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.832620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.832652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.832669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.417 [2024-07-11 11:20:16.837823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.417 [2024-07-11 11:20:16.837856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.417 [2024-07-11 11:20:16.837873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.842729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.842783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.842804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.847385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.847423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.847443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.850546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.850578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.850595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.853799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.853831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.853864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.858270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.858301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.862998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.863040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.863057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.868662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.868694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.868712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.873261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.873293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.873311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.878385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.878418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.878435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.883436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.883468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.883485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.887985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.888016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.888033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.892631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.892663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.892680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.898283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.898315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.898332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.905926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.905959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.905991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.913352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.913385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.913402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.920785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.920817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.920835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.928289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.928322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.928339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.935896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.935928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.935946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.943451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.943483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.943500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.951058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.951106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.951124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.958548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.958580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.958597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.966133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.966181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.966206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.973957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.973990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.974008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.981485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.981533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.981550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.989251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.989296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.989312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:16.996882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:16.996915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:16.996932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:17.004506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:17.004538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:17.004557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:17.012302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.680 [2024-07-11 11:20:17.012349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.680 [2024-07-11 11:20:17.012366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.680 [2024-07-11 11:20:17.018723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.018764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.018784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.023951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.023983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.024000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.028540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.028579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.028597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.033103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.033146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.033162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.037924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.037956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.037973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.042555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.042600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.042617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.047187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.047218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.047250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.051804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.051836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.051853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.056440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.056489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.056505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.061148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.061196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.061212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.066504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.066551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.066568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.073368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.073402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.073419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.080844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.080878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.080896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.086585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.086616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.086633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.092880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.092913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.092931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.681 [2024-07-11 11:20:17.098740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.681 [2024-07-11 11:20:17.098780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.681 [2024-07-11 11:20:17.098798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.104582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.104615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.104632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.110603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.110636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.110672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.116185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.116217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.116235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.121172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.121204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.121228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.125884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.125916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.130512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.130560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.130577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.135191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.135223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.135240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.139866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.139898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.139930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.144542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.144574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.144591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.149109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.149142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.944 [2024-07-11 11:20:17.149164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.944 [2024-07-11 11:20:17.153551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0)
00:34:02.944 [2024-07-11 11:20:17.153583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.153599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.158352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.158383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.158399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.163678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.163715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.163733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.169004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.169041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.169058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.173998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.174030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.174047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.178744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.178808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.183297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.183328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.183345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.187927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.187958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.187974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.192698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.192728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.192745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.197308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.197338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.197355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.202092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.202123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.202139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.206913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.206943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.206961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.211632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.211677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.211693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.216603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.944 [2024-07-11 11:20:17.216649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.944 [2024-07-11 11:20:17.216665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.944 [2024-07-11 11:20:17.221637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 
00:34:02.944 [2024-07-11 11:20:17.221669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.221686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.227596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.227627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.227659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.235339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.235386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.235403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.242434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.242480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.242497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.249571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.249602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.249618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.255140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.255171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.255197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.260053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.260085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.260102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.264828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.264860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.264877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.270452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.270484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.270501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.277226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.277258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.277276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.282625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.282658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.282676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.287953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.287986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.288003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.293479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.293511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.293529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.300211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.300244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.300261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.307684] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.307717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.307751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.314239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.314272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.314290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.320815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.320847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.320865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.324706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.324738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.324762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.328942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.328975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.328992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.333803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.333834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.333851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.338290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.338321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.338338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:02.945 [2024-07-11 11:20:17.342865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.342896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.342913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.347362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.347392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.347414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.351850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.351881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.351898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.356278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.356323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.356340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.361028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.361074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.361092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.945 [2024-07-11 11:20:17.366457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:02.945 [2024-07-11 11:20:17.366490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.945 [2024-07-11 11:20:17.366523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.372882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.372914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.372930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.378738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.378791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.378809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.384191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.384237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.384255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.389241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.389272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.389289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.394034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.394072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.394090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.399260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.399291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.399309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.406058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.406105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.406122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.411764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.411811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.411829] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.416996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.417028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.417046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.421890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.421922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.421939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.425995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.426026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.426043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.430745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.430782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.430800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.435428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.435459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.435476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.439973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.440005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.440022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.444690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.444723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.444741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.447745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.447798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.447816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.452428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.452458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.452473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.456909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.206 [2024-07-11 11:20:17.456940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.206 [2024-07-11 11:20:17.456956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.206 [2024-07-11 11:20:17.461357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.461402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.461418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.465833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.465864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.465880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.470242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.470272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.470288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.474716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.474767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:03.207 [2024-07-11 11:20:17.474794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.479209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.479239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.479271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.483716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.483769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.483787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.488210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.488240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.488258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.493593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.493624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.493642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.497584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.497613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.497630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.502267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.502296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.502313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.508090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.508121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.508152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.512124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.512153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.512168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.516726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.516786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.516806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.523004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.523035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.523052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.529365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.529397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.529414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.536182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.536230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.536247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.542450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.542487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.207 [2024-07-11 11:20:17.542519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.207 [2024-07-11 11:20:17.548140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c4e3d0) 00:34:03.207 [2024-07-11 11:20:17.548170] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
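For anyone auditing this flood offline, the per-command completions can be tallied straight from a saved copy of the console output; a minimal sketch (build.log is a hypothetical local capture of this log, not a file produced by the job):

  # Count COMMAND TRANSIENT TRANSPORT ERROR completions per cid in a saved log.
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' build.log |
      awk -F'cid:' '{count[$2]++} END {for (c in count) print "cid " c ": " count[c]}' |
      sort -t: -k2 -rn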
00:34:03.208 
00:34:03.208 Latency(us)
00:34:03.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.208 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:03.208 	nvme0n1 : 2.00 5832.40 729.05 0.00 0.00 2739.12 755.48 7961.41
00:34:03.208 ===================================================================================================================
00:34:03.208 Total : 5832.40 729.05 0.00 0.00 2739.12 755.48 7961.41
00:34:03.208 0
00:34:03.208 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:03.208 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:03.208 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:03.208 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:03.208 | .driver_specific
00:34:03.208 | .nvme_error
00:34:03.208 | .status_code
00:34:03.208 | .command_transient_transport_error'
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 376 > 0 ))
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 398944
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 398944 ']'
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 398944
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
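The trace above is how host/digest.sh decides the first pass succeeded: get_transient_errcount asks the bdevperf instance for iostat over its private RPC socket and extracts the per-status-code counter that bdev_nvme_set_options --nvme-error-stat maintains (here 376, checked against > 0). A standalone sketch of the same query, under the assumption that the paths match this workspace:

  # Hypothetical standalone equivalent of get_transient_errcount nvme0n1:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors counted: $errcount"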
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 398944
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 398944'
00:34:03.468 killing process with pid 398944
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 398944
00:34:03.468 Received shutdown signal, test time was about 2.000000 seconds
00:34:03.468 
00:34:03.468 Latency(us)
00:34:03.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.468 ===================================================================================================================
00:34:03.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:03.468 11:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 398944
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=399349
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 399349 /var/tmp/bperf.sock
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 399349 ']'
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:03.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:03.727 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:03.727 [2024-07-11 11:20:18.130356] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
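run_bperf_err then restarts bdevperf for the write-side pass: core mask 0x2, a private RPC socket, 4096-byte random writes at queue depth 128 for 2 seconds, and -z so the app idles until a perform_tests RPC arrives. A compressed sketch of that launch (the real waitforlisten helper retries up to max_retries=100 times; the rpc_get_methods probe below stands in for that loop):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Wait until the UNIX-domain RPC socket is accepting requests.
  until "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done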
00:34:03.727 [2024-07-11 11:20:18.130449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399349 ]
00:34:03.985 EAL: No free 2048 kB hugepages reported on node 1
00:34:03.985 [2024-07-11 11:20:18.191080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:03.985 [2024-07-11 11:20:18.272679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:03.985 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:03.985 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:34:03.985 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:03.985 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:04.244 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:04.244 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:04.244 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:04.502 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:04.502 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:04.502 11:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:04.761 nvme0n1
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:04.761 11:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:04.761 Running I/O for 2 seconds...
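Those last RPCs are the core of the test fixture: error counting is switched on with unlimited bdev retries, any stale crc32c injection is cleared, the controller is attached with --ddgst so every received TCP data PDU is digest-checked, and the accel layer is then told to corrupt every 256th crc32c result, which is what makes the digest checks fail on purpose. A sketch of the same sequence as plain commands (routing every call to the bdevperf socket is an assumption; the log's rpc_cmd wrapper does not show its target socket):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count every status code, retry forever
  rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean slate
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest enabled
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt every 256th crc32c
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The flood of Data digest errors that follows is the intended outcome of that injection.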
00:34:05.022 [2024-07-11 11:20:19.203939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190edd58
00:34:05.022 [2024-07-11 11:20:19.204885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.204925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.216193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e5220
00:34:05.022 [2024-07-11 11:20:19.217503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.217549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.230503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190dece0
00:34:05.022 [2024-07-11 11:20:19.232330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.232359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.238858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e84c0
00:34:05.022 [2024-07-11 11:20:19.239644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.239672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.251244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e3498
00:34:05.022 [2024-07-11 11:20:19.252260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.262407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190df988
00:34:05.022 [2024-07-11 11:20:19.263404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.022 [2024-07-11 11:20:19.263433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:05.022 [2024-07-11 11:20:19.274649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190fef90
00:34:05.022 [2024-07-11 11:20:19.275714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.275742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.286747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e23b8
00:34:05.023 [2024-07-11 11:20:19.288019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.288048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.299096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190f92c0
00:34:05.023 [2024-07-11 11:20:19.300522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.300567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.311558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190df988
00:34:05.023 [2024-07-11 11:20:19.313083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.313133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.323708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190ed4e8
00:34:05.023 [2024-07-11 11:20:19.325421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.325449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.335728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190ec408
00:34:05.023 [2024-07-11 11:20:19.337439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.337467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.343721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190efae0
00:34:05.023 [2024-07-11 11:20:19.344516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.344544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.356072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190f2948
00:34:05.023 [2024-07-11 11:20:19.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.357029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.368483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0
00:34:05.023 [2024-07-11 11:20:19.369575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.369603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.380820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190f8a50
00:34:05.023 [2024-07-11 11:20:19.382141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.382169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.393265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190fc998
00:34:05.023 [2024-07-11 11:20:19.394598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.394627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:05.023 [2024-07-11 11:20:19.405575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190f2948
00:34:05.023 [2024-07-11 11:20:19.407116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.407146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0
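[editor's note] The repeated "Data digest error" above is the NVMe/TCP data digest (DDGST) check failing: the receiving side recomputes a CRC32C over each data PDU's DATA field and compares it against the digest carried in the PDU, and this test deliberately corrupts the digest so every injected WRITE fails the check. Below is a minimal sketch of that digest calculation, assuming the iSCSI-style CRC32C parameters NVMe/TCP uses (seed of all ones, reflected polynomial 0x82F63B78, final inversion). The function names crc32c_update/nvme_tcp_data_digest are illustrative, not SPDK symbols; SPDK's optimized entry point for this is spdk_crc32c_update().

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli). An editorial illustration only; real code
 * would use a table-driven or hardware-accelerated implementation. */
static uint32_t crc32c_update(uint32_t crc, const uint8_t *buf, size_t len)
{
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ ((crc & 1U) ? 0x82F63B78U : 0U);
    }
    return crc;
}

/* Digest over a PDU's DATA field: seed with all ones, invert the result.
 * A receiver recomputes this and compares it with the PDU's DDGST field;
 * a mismatch is what the log reports as "Data digest error". */
static uint32_t nvme_tcp_data_digest(const uint8_t *data, size_t len)
{
    return ~crc32c_update(0xFFFFFFFFu, data, len);
}

int main(void)
{
    const uint8_t payload[] = "123456789";
    /* Well-known CRC32C check value for "123456789": 0xe3069283. */
    printf("ddgst=0x%08x\n", nvme_tcp_data_digest(payload, 9));
    return 0;
}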
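[editor's note] Each failed WRITE then completes with the status printed as "(00/22)": status code type 0x0 (generic command status) / status code 0x22 (Transient Transport Error), and dnr:0 means the host may retry the command. A sketch of how spdk_nvme_print_completion's fields unpack from the 16-bit status in completion-queue-entry dword 3, with the layout per the NVMe base specification; the variable names and the sample status word are illustrative, not SPDK types.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Example status word matching these completions:
     * phase 0, SC 0x22, SCT 0x0, M 0, DNR 0 ("p:0 m:0 dnr:0"). */
    uint16_t status = 0x22 << 1;

    unsigned p   = status & 0x1;          /* phase tag                       */
    unsigned sc  = (status >> 1) & 0xff;  /* 0x22: Transient Transport Error */
    unsigned sct = (status >> 9) & 0x7;   /* 0x0: generic command status     */
    unsigned m   = (status >> 14) & 0x1;  /* more status information         */
    unsigned dnr = (status >> 15) & 0x1;  /* 0: command may be retried       */

    /* Prints "(00/22) p:0 m:0 dnr:0", the same encoding the log uses. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}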
00:34:05.023 [2024-07-11 11:20:19.416900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0
00:34:05.023 [2024-07-11 11:20:19.417174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:05.023 [2024-07-11 11:20:19.417226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... 2024-07-11 11:20:19.431168 through 11:20:21.079345 (00:34:05.023 - 00:34:06.853): the same three-line sequence repeats against pdu=0x2000190e0ea0 for each injected WRITE on sqid:1, with cid cycling 59-65 and lba varying per command, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:007b p:0 m:0 dnr:0 ...]
00:34:06.853 [2024-07-11 11:20:21.092886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0
00:34:06.853 [2024-07-11 11:20:21.093161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.853 [2024-07-11 11:20:21.093201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0
sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.106886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.107087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.107114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.121128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.121422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.121449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.135292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.135535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.135561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.149461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.149698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.149739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.163510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.163805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.163845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.177602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.177843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.177870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.853 [2024-07-11 11:20:21.191467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394990) with pdu=0x2000190e0ea0 00:34:06.853 [2024-07-11 11:20:21.191706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.853 [2024-07-11 11:20:21.191767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:06.853 
00:34:06.853 Latency(us)
00:34:06.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:06.853 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:06.853 nvme0n1 : 2.01 18583.57 72.59 0.00 0.00 6871.40 2815.62 16505.36
00:34:06.853 ===================================================================================================================
00:34:06.853 Total : 18583.57 72.59 0.00 0.00 6871.40 2815.62 16505.36
00:34:06.853 0
00:34:06.853 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:06.853 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:06.853 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:06.853 | .driver_specific
00:34:06.853 | .nvme_error
00:34:06.853 | .status_code
00:34:06.853 | .command_transient_transport_error'
00:34:06.853 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:07.111 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:34:07.111 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 399349
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 399349 ']'
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 399349
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399349
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 399349'
00:34:07.112 killing process with pid 399349
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 399349
00:34:07.112 Received shutdown signal, test time was about 2.000000 seconds
00:34:07.112 
00:34:07.112 Latency(us)
00:34:07.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.112 ===================================================================================================================
00:34:07.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 399349
00:34:07.112 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=399758
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 399758 /var/tmp/bperf.sock
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 399758 ']'
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:07.370 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:07.370 [2024-07-11 11:20:21.731170] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization...
00:34:07.370 [2024-07-11 11:20:21.731261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399758 ]
00:34:07.370 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:07.370 Zero copy mechanism will not be used.
00:34:07.370 EAL: No free 2048 kB hugepages reported on node 1
00:34:07.370 [2024-07-11 11:20:21.788537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:07.370 [2024-07-11 11:20:21.874780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:07.629 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:07.629 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:34:07.629 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:07.629 11:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:07.887 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:08.147 nvme0n1
00:34:08.147 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:08.147 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:08.147 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:08.409 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:08.409 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:08.409 11:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:08.409 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:08.409 Zero copy mechanism will not be used.
00:34:08.409 Running I/O for 2 seconds...
00:34:08.409 [2024-07-11 11:20:22.683442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.683788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.683836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.689141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.689431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.689463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.694854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.695192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.695222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.700472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.700811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.700852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.705806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.706116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.706147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.712071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.712455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.712485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.718570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.718895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.718925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.724179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.724455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.724485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.729550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.729835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.729866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.735504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.735821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.735851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.742054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.742338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.748804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.749217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.749247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.756148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.756444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.756473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.761709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.762008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.762040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.767169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.767520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.771996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.772299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.772344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.776714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.777061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.409 [2024-07-11 11:20:22.777106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.409 [2024-07-11 11:20:22.782035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.409 [2024-07-11 11:20:22.782397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.782427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.787506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.787885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.787959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.792993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.793283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.798439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.798790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.798821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.803815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.804101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.809011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.809357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.809386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.813392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.813655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.813699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.818823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.819167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.819196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.824907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.825154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.825213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.410 [2024-07-11 11:20:22.831308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.410 [2024-07-11 11:20:22.831623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.410 [2024-07-11 11:20:22.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.837703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.838037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 
[2024-07-11 11:20:22.838083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.844252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.844567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.844597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.850709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.851027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.851098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.857190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.857484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.857514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.861884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.862152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.862219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.866304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.866536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.866582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.870817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.871054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.871106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.875403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.875679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.875721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.879857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.880206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.880236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.885012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.885196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.890365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.890517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.890544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.896576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.896696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.896760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.902393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.902561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.908765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.908901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.908928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.915114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.915273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.915314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.921511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.921687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.673 [2024-07-11 11:20:22.921714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.673 [2024-07-11 11:20:22.926535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.673 [2024-07-11 11:20:22.926783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.926812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.931005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.931195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.931223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.935353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.935545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.935590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.939838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.940025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.940113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.944807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.944900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.944928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.949802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.949984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.954205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.954404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.954431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.958517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.958767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.958796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.962914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.963142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.963207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.967205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.967466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.967509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.971542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.971797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.971870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.975937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.976134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.976177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.980225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 
11:20:22.980405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.980433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.984582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.984715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.984791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.988795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.988963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.989027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.993067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.993258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.993285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:22.997336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:22.997547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:22.997574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.001629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.001795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.001842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.005889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.006092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.006154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.010200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with 
pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.010371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.010423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.014496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.014654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.014699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.018880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.019072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.019130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.023280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.674 [2024-07-11 11:20:23.023457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.674 [2024-07-11 11:20:23.023517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.674 [2024-07-11 11:20:23.027461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.675 [2024-07-11 11:20:23.027600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.675 [2024-07-11 11:20:23.027628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.675 [2024-07-11 11:20:23.031845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.675 [2024-07-11 11:20:23.031985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.675 [2024-07-11 11:20:23.032084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.675 [2024-07-11 11:20:23.036093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.675 [2024-07-11 11:20:23.036300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.675 [2024-07-11 11:20:23.036352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.675 [2024-07-11 11:20:23.040394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:08.675 [2024-07-11 11:20:23.040547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.675 [2024-07-11 11:20:23.040598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated digest-error cycle omitted: each iteration logs tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90, followed by the matching nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE print (len:32, varying lba, cid 0/1/2/15) and a nvme_qpair.c: 474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061; the pattern recurs roughly every 5 ms from 11:20:23.044 through 11:20:23.792 ...]
00:34:09.465 [2024-07-11 11:20:23.792283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.792521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.792588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.797154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.797520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.797585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.802294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.802576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.802665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.807515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.808038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.812502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.813049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.813159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.817609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.818148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.818199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.822590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.823094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.823174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.827853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.828208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.828271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.832897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.833274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.833368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.838089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.838636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.838674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.843184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.843769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.843832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.848185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.848500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.848566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.853158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.853562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.853615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.858228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.858437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.858505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.863202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.863612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.863684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.868494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.868816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.465 [2024-07-11 11:20:23.868896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.465 [2024-07-11 11:20:23.873633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.465 [2024-07-11 11:20:23.874034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.466 [2024-07-11 11:20:23.874146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.466 [2024-07-11 11:20:23.878689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.466 [2024-07-11 11:20:23.878996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.466 [2024-07-11 11:20:23.879105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.466 [2024-07-11 11:20:23.883747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.466 [2024-07-11 11:20:23.884041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.466 [2024-07-11 11:20:23.884144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.888834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.889239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.889318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.893838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.894370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.894425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.898799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.899134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.899204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.904142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.904375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.904431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.909251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.909683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.909792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.914375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.914713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.919615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.920133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.920196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.924622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.925088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.925155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.929664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.930164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.930200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.934613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 
11:20:23.934989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.935058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.939555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.940052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.940117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.944830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.945078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.945138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.949732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.950004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.950131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.954973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.955394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.955459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.959981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.960551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.960614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.965110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.965527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.965642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.970294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with 
pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.970766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.970825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.975268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.975722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.975810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.980379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.980901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.980965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.985499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.985877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.985972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.990644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.991145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.991203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:23.995633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:23.996173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:23.996248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.000844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.001172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.001236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.005718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.005943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.010688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.011273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.011313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.015746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.016048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.016087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.021660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.021910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.021997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.026686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.026988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.027093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.031932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.032354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.032410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.036923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.037427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.037530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.042420] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.042708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.042770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.048035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.048198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.048236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.053966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.054295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.054332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.059638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.059840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.059912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.065174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.065403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.065438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.070810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.070987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.071024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.724 [2024-07-11 11:20:24.076047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.724 [2024-07-11 11:20:24.076365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.724 [2024-07-11 11:20:24.076436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
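[Editor's note] The records above all show the same event: SPDK's TCP transport (tcp.c:data_crc32_calc_done) recomputes the CRC32C data digest over a received data PDU, finds it does not match the DDGST field carried on the wire, and the command is completed with TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the initiator is allowed to retry. Below is a minimal, self-contained sketch of that digest check — not SPDK's actual implementation (SPDK uses its own table/hardware-accelerated CRC32C helpers); the payload, the received-digest value, and the bit-at-a-time CRC loop are illustrative assumptions only.

/*
 * Sketch of the NVMe/TCP data-digest (DDGST) check that produces the
 * "Data digest error" records above. Assumptions: toy payload, fake
 * wire digest, naive bit-at-a-time CRC32C loop (illustration only).
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* CRC32C (Castagnoli), reflected polynomial 0x82F63B78, the digest
 * algorithm the NVMe/TCP transport uses for HDGST/DDGST. */
static uint32_t crc32c(const void *buf, size_t len, uint32_t crc)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & (-(crc & 1)));
    }
    return ~crc;
}

int main(void)
{
    /* Hypothetical 32-block-sized payload stand-in; the WRITEs in the
     * log above are all len:32. */
    uint8_t payload[32] = { 0 };
    uint32_t ddgst_on_wire = 0xdeadbeefu;  /* assumed received digest */
    uint32_t computed = crc32c(payload, sizeof(payload), 0);

    if (computed != ddgst_on_wire) {
        /* This mismatch is the condition the log flags; the command is
         * then failed as a transient transport error so it may be
         * retried rather than treated as data corruption on media. */
        printf("Data digest error: computed=0x%08x wire=0x%08x\n",
               computed, ddgst_on_wire);
        return 1;
    }
    return 0;
}

In the log, every such mismatch is paired with the originating WRITE (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion) showing status (00/22), which is the Generic Command Status "Transient Transport Error" — consistent with a digest-error-injection test exercising this retry path repeatedly.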
00:34:09.725 [2024-07-11 11:20:24.081527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.081720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.081823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.086861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.087015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.087051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.092136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.092380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.092442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.097630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.097841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.097900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.102987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.103133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.103162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.108322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.108491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.108520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.113526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.113691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.113720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.118891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.119223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.119252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.124108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.124345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.124374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.129384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.129568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.129638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.134610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.134800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.134863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.139817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.140015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.140090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.725 [2024-07-11 11:20:24.144997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.725 [2024-07-11 11:20:24.145208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.725 [2024-07-11 11:20:24.145290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.150368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.150505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.150533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.155677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.155855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.155885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.160912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.161047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.161091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.166237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.166364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.166392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.171818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.171974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.172003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.176926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.177075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.177103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.182367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.182493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.182522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.187763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.187996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.188025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.192951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.193203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.193232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.198351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.198551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.198622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.203706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.203878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.209134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.209341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.209369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.214322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.214521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.214550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.219687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.219878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.219907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.225008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.225220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 
[2024-07-11 11:20:24.225286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.230363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.230522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.230551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.235541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.235732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.235770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.240892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.241074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.246171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.246393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.246422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.251481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.251729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.251784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.256791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.256962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.256992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.983 [2024-07-11 11:20:24.261990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90 00:34:09.983 [2024-07-11 11:20:24.262160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:09.983 [2024-07-11 11:20:24.262188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:09.983 [2024-07-11 11:20:24.267260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1394cd0) with pdu=0x2000190fef90
00:34:09.983 [2024-07-11 11:20:24.267475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:09.983 [2024-07-11 11:20:24.267504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record sequence -- a data_crc32_calc_done data digest error on tqpair=(0x1394cd0), the offending WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats dozens more times between 11:20:24.272 and 11:20:24.680, first for cid:15 and later for cid:2, differing only in timestamp, lba, and sqhd; the repetitions are elided here ...]
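Each elided repetition is one corrupted-digest WRITE: the CRC32C data digest check on the TCP qpair fails, and the command is completed with a transient transport error that the harness counts once I/O stops. A minimal sketch for tallying both halves of the pattern from a saved copy of this log (the file name build.log is an assumption, not something this job produced):

  # Count digest failures and the transient-transport-error completions they
  # triggered; the two tallies should track each other.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log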
00:34:10.504 Latency(us)
00:34:10.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:10.504 nvme0n1 : 2.00 5994.73 749.34 0.00 0.00 2657.61 1868.99 7427.41
00:34:10.504 =================================================================================================================== 00:34:10.504 Total : 5994.73 749.34 0.00 0.00 2657.61 1868.99 7427.41 00:34:10.504 0 00:34:10.504 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:10.504 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:10.504 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:10.504 | .driver_specific 00:34:10.504 | .nvme_error 00:34:10.504 | .status_code 00:34:10.504 | .command_transient_transport_error' 00:34:10.504 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 )) 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 399758 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 399758 ']' 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 399758 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399758 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 399758' 00:34:10.764 killing process with pid 399758 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 399758 00:34:10.764 Received shutdown signal, test time was about 2.000000 seconds 00:34:10.764 00:34:10.764 Latency(us) 00:34:10.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.764 =================================================================================================================== 00:34:10.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.764 11:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 399758 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 398392 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 398392 ']' 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 398392 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 398392 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 
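The check (( 387 > 0 )) above consumes get_transient_errcount, which queries the bdevperf RPC socket for iostat and filters the JSON with the jq expression traced before it. Issued standalone, the same query would look roughly like this; the rpc.py path, socket, bdev name, and jq filter are all taken from the trace above, so only the one-off framing is assumed:

  # One-off version of get_transient_errcount: ask bdevperf for iostat and
  # pull out the transient-transport-error counter.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'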
00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 398392' 00:34:11.022 killing process with pid 398392 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 398392 00:34:11.022 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 398392 00:34:11.281 00:34:11.281 real 0m15.061s 00:34:11.281 user 0m28.948s 00:34:11.281 sys 0m4.535s 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:11.281 ************************************ 00:34:11.281 END TEST nvmf_digest_error 00:34:11.281 ************************************ 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:11.281 rmmod nvme_tcp 00:34:11.281 rmmod nvme_fabrics 00:34:11.281 rmmod nvme_keyring 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 398392 ']' 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 398392 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 398392 ']' 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 398392 00:34:11.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (398392) - No such process 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 398392 is not found' 00:34:11.281 Process with pid 398392 is not found 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.281 11:20:25 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.179 11:20:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:13.179 00:34:13.179 real 0m34.747s 00:34:13.179 user 0m59.931s 00:34:13.179 sys 0m10.414s 00:34:13.179 11:20:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:13.179 11:20:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:13.179 ************************************ 00:34:13.179 END TEST nvmf_digest 00:34:13.179 ************************************ 00:34:13.438 11:20:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:13.438 11:20:27 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:13.438 11:20:27 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:13.438 11:20:27 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:13.438 11:20:27 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:13.438 11:20:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:13.438 11:20:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.438 11:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.438 ************************************ 00:34:13.438 START TEST nvmf_bdevperf 00:34:13.438 ************************************ 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:13.438 * Looking for test storage... 00:34:13.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.438 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same /opt prefix repeated several times over, then the standard system dirs; full duplicate-laden PATH value elided ...]:/var/lib/snapd/snap/bin 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value re-prefixed; elided ...] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value re-prefixed; elided ...] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo [... exported PATH value echoed back; elided ...] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.439 11:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:15.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.977 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:15.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:15.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:15.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.978 
11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:34:15.978 00:34:15.978 --- 10.0.0.2 ping statistics --- 00:34:15.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.978 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:34:15.978 00:34:15.978 --- 10.0.0.1 ping statistics --- 00:34:15.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.978 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=402103 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 402103 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 402103 ']' 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
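The namespace plumbing traced above reduces to the sketch below. Interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and every command are taken from the trace itself; only the comments are added:

# The target-side port is moved into its own network namespace, so
# initiator-to-target traffic crosses a real TCP path on a single host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator reaches target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target reaches initiator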
00:34:15.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.978 11:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 [2024-07-11 11:20:29.990328] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:34:15.978 [2024-07-11 11:20:29.990403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.978 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.978 [2024-07-11 11:20:30.060582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:15.978 [2024-07-11 11:20:30.149950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.978 [2024-07-11 11:20:30.150004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.978 [2024-07-11 11:20:30.150032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.978 [2024-07-11 11:20:30.150053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.978 [2024-07-11 11:20:30.150063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.978 [2024-07-11 11:20:30.150228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.978 [2024-07-11 11:20:30.150331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.978 [2024-07-11 11:20:30.150334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 [2024-07-11 11:20:30.295714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 Malloc0 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.978 [2024-07-11 11:20:30.356583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.978 11:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.979 { 00:34:15.979 "params": { 00:34:15.979 "name": "Nvme$subsystem", 00:34:15.979 "trtype": "$TEST_TRANSPORT", 00:34:15.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.979 "adrfam": "ipv4", 00:34:15.979 "trsvcid": "$NVMF_PORT", 00:34:15.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.979 "hdgst": ${hdgst:-false}, 00:34:15.979 "ddgst": ${ddgst:-false} 00:34:15.979 }, 00:34:15.979 "method": "bdev_nvme_attach_controller" 00:34:15.979 } 00:34:15.979 EOF 00:34:15.979 )") 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:15.979 11:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:15.979 "params": { 00:34:15.979 "name": "Nvme1", 00:34:15.979 "trtype": "tcp", 00:34:15.979 "traddr": "10.0.0.2", 00:34:15.979 "adrfam": "ipv4", 00:34:15.979 "trsvcid": "4420", 00:34:15.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.979 "hdgst": false, 00:34:15.979 "ddgst": false 00:34:15.979 }, 00:34:15.979 "method": "bdev_nvme_attach_controller" 00:34:15.979 }' 00:34:16.239 [2024-07-11 11:20:30.403821] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
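The tgt_init sequence just traced is equivalent to launching the target inside the namespace and provisioning it over its RPC socket; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A condensed sketch, with every argument copied from the rpc_cmd lines above (paths relative to the SPDK tree):

# Start the target in the namespace; the harness waits for /var/tmp/spdk.sock
# (waitforlisten) before issuing any RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420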
00:34:16.239 [2024-07-11 11:20:30.403903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402251 ] 00:34:16.239 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.239 [2024-07-11 11:20:30.466465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.239 [2024-07-11 11:20:30.559825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.519 Running I/O for 1 seconds... 00:34:17.465 00:34:17.465 Latency(us) 00:34:17.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.465 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:17.465 Verification LBA range: start 0x0 length 0x4000 00:34:17.465 Nvme1n1 : 1.01 8202.87 32.04 0.00 0.00 15542.82 3470.98 14175.19 00:34:17.465 =================================================================================================================== 00:34:17.465 Total : 8202.87 32.04 0.00 0.00 15542.82 3470.98 14175.19 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=402389 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:17.725 { 00:34:17.725 "params": { 00:34:17.725 "name": "Nvme$subsystem", 00:34:17.725 "trtype": "$TEST_TRANSPORT", 00:34:17.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.725 "adrfam": "ipv4", 00:34:17.725 "trsvcid": "$NVMF_PORT", 00:34:17.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.725 "hdgst": ${hdgst:-false}, 00:34:17.725 "ddgst": ${ddgst:-false} 00:34:17.725 }, 00:34:17.725 "method": "bdev_nvme_attach_controller" 00:34:17.725 } 00:34:17.725 EOF 00:34:17.725 )") 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:17.725 11:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:17.725 "params": { 00:34:17.725 "name": "Nvme1", 00:34:17.725 "trtype": "tcp", 00:34:17.725 "traddr": "10.0.0.2", 00:34:17.725 "adrfam": "ipv4", 00:34:17.725 "trsvcid": "4420", 00:34:17.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.725 "hdgst": false, 00:34:17.725 "ddgst": false 00:34:17.725 }, 00:34:17.725 "method": "bdev_nvme_attach_controller" 00:34:17.725 }' 00:34:17.725 [2024-07-11 11:20:32.031135] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:34:17.725 [2024-07-11 11:20:32.031207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402389 ] 00:34:17.725 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.725 [2024-07-11 11:20:32.089430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.984 [2024-07-11 11:20:32.176521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.984 Running I/O for 15 seconds... 00:34:21.290 11:20:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 402103 00:34:21.290 11:20:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:34:21.290 [2024-07-11 11:20:35.000657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:21.290 [2024-07-11 11:20:35.000710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~125 further command/completion pairs in the same pattern elided: every remaining in-flight WRITE (lba 54568-55000) and READ (lba 53984-54544), each len:8, completed as ABORTED - SQ DELETION (00/08) while the submission queue was torn down ...]
00:34:21.293 [2024-07-11 11:20:35.004469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c641f0 is same with the state(5) to be set 00:34:21.293 [2024-07-11 11:20:35.004486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:21.293 [2024-07-11 11:20:35.004496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:21.293 [2024-07-11 11:20:35.004506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54552 len:8 PRP1 0x0 PRP2 0x0 00:34:21.293 [2024-07-11 11:20:35.004518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.293 [2024-07-11 11:20:35.004574]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c641f0 was disconnected and freed. reset controller. 00:34:21.293 [2024-07-11 11:20:35.004635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.293 [2024-07-11 11:20:35.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.293 [2024-07-11 11:20:35.004688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.294 [2024-07-11 11:20:35.004715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.294 [2024-07-11 11:20:35.004729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.294 [2024-07-11 11:20:35.004743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.294 [2024-07-11 11:20:35.004764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.294 [2024-07-11 11:20:35.004795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.294 [2024-07-11 11:20:35.004809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.007918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.007955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.008557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.008609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.008624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.008856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.009086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.009104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.009118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.012292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
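The first reset cycle above has a fixed shape: the in-flight READs on qid 1 are completed manually with ABORTED - SQ DELETION, the TCP qpair is freed, and the reconnect attempt fails inside posix_sock_create with errno = 111, which is ECONNREFUSED on Linux: the target side is reachable but nothing is listening on 10.0.0.2:4420 during this window. A minimal standalone sketch of that failure mode follows; it is not SPDK code, only a plain POSIX connect() call, and the only values taken from the trace are the address and port.

    /* Sketch of the connect() failure behind posix_sock_create in the log.
     * A blocking TCP connect() to a reachable host with no listener on the
     * port fails with errno 111 (ECONNREFUSED) on Linux; an unreachable
     * host would instead time out or return EHOSTUNREACH. Not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP port from the trace */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }
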
00:34:21.294 [2024-07-11 11:20:35.021457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.021827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.021856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.021878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.022104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.022313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.022333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.022345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.025273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.294 [2024-07-11 11:20:35.034673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.035069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.035112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.035127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.035362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.035554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.035572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.035584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.038511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.294 [2024-07-11 11:20:35.047880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.048269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.048296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.048312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.048547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.048763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.048784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.048797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.051730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.294 [2024-07-11 11:20:35.060879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.061255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.061283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.061298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.061532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.061729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.061770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.061784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.064690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.294 [2024-07-11 11:20:35.073981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.074347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.074375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.074390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.074610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.074865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.074886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.074899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.077806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.294 [2024-07-11 11:20:35.087100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.087467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.087493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.087508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.087729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.087970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.087990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.088002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.090938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.294 [2024-07-11 11:20:35.100144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.100511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.100537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.100552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.100782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.100986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.101006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.101018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.103915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.294 [2024-07-11 11:20:35.113134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.113498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.113525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.294 [2024-07-11 11:20:35.113540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.294 [2024-07-11 11:20:35.113783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.294 [2024-07-11 11:20:35.113995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.294 [2024-07-11 11:20:35.114015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.294 [2024-07-11 11:20:35.114027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.294 [2024-07-11 11:20:35.116813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.294 [2024-07-11 11:20:35.126150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.294 [2024-07-11 11:20:35.126472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.294 [2024-07-11 11:20:35.126498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.126513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.126712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.126954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.126975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.126987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.129885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.295 [2024-07-11 11:20:35.139222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.139654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.139681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.139697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.139920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.140169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.140188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.140199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.143097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.295 [2024-07-11 11:20:35.152450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.152817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.152845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.152865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.153086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.153314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.153333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.153345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.156246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.295 [2024-07-11 11:20:35.165519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.165866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.165892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.165907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.166122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.166329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.166348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.166360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.169270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.295 [2024-07-11 11:20:35.178549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.178922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.178949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.178964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.179198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.179390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.179408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.179420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.182340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.295 [2024-07-11 11:20:35.191691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.192080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.192122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.192137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.192371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.192562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.192585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.192597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.195480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.295 [2024-07-11 11:20:35.204774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.205157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.205182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.205197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.205396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.205604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.205622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.205635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.208528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.295 [2024-07-11 11:20:35.217772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.218133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.218160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.218175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.218380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.218624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.218642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.218654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.221502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.295 [2024-07-11 11:20:35.230798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.295 [2024-07-11 11:20:35.231163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.295 [2024-07-11 11:20:35.231189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.295 [2024-07-11 11:20:35.231203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.295 [2024-07-11 11:20:35.231417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.295 [2024-07-11 11:20:35.231624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.295 [2024-07-11 11:20:35.231642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.295 [2024-07-11 11:20:35.231654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.295 [2024-07-11 11:20:35.234561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.243850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.244213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.244239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.244255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.244475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.244682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.244701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.244712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.247610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.296 [2024-07-11 11:20:35.256892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.257294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.257321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.257337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.257579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.257810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.257847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.257861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.261374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.270791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.271211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.271238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.271254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.271467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.271717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.271736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.271774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.274766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.296 [2024-07-11 11:20:35.283969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.284333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.284360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.284375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.284613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.284830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.284849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.284862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.287644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.297298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.297664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.297691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.297706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.297959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.298205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.298223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.298235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.301115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
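Every failed attempt is also followed by "Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor". The parenthesized 9 is an errno value, EBADF: by the time the transport tries to flush the qpair, its socket descriptor has already been torn down. The effect is easy to reproduce outside SPDK with any closed descriptor; a hedged illustration, unrelated to the SPDK sources:

    /* Illustrative only: I/O on a descriptor that has already been closed
     * fails with errno 9 (EBADF), the "(9): Bad file descriptor" seen in
     * the flush errors above. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) {
            perror("pipe");
            return 1;
        }

        close(fds[1]);                 /* tear the write side down first */

        if (write(fds[1], "x", 1) < 0) {
            /* Prints: write failed, errno = 9 (Bad file descriptor) */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fds[0]);
        return 0;
    }
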
00:34:21.296 [2024-07-11 11:20:35.310443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.310803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.310830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.310845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.311079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.311270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.311289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.311301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.314227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.323594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.323931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.323957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.323971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.324185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.324393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.324412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.324428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.327347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.296 [2024-07-11 11:20:35.336787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.337113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.337139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.337154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.337353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.337576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.337595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.337606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.340556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.350286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.350657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.350684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.350699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.350953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.351195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.351214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.351226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.354242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.296 [2024-07-11 11:20:35.363507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.363914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.363942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.363958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.364211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.364402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.364420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.364432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.367447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.296 [2024-07-11 11:20:35.376765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.296 [2024-07-11 11:20:35.377157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.296 [2024-07-11 11:20:35.377189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.296 [2024-07-11 11:20:35.377204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.296 [2024-07-11 11:20:35.377419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.296 [2024-07-11 11:20:35.377626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.296 [2024-07-11 11:20:35.377645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.296 [2024-07-11 11:20:35.377657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.296 [2024-07-11 11:20:35.380649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.390012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.390401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.390427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.390442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.390662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.390906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.390927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.390941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.393896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.297 [2024-07-11 11:20:35.403021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.403383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.403409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.403424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.403638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.403891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.403911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.403924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.406817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.416069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.416404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.416429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.416444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.416659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.416917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.416938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.416951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.419842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.297 [2024-07-11 11:20:35.429246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.429581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.429606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.429621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.429865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.430085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.430105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.430118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.433006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.442265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.442630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.442656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.442671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.442934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.443149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.443167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.443179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.446049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.297 [2024-07-11 11:20:35.455342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.455704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.455731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.455746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.455999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.456225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.456244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.456256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.459147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.468420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.468734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.468781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.468796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.469010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.469218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.469236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.469248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.472092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.297 [2024-07-11 11:20:35.481451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.481786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.481812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.481827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.482027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.482251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.482270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.482282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.485091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.494538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.494910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.494937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.494952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.495187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.495379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.495397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.495409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.498216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.297 [2024-07-11 11:20:35.507506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.297 [2024-07-11 11:20:35.507920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.297 [2024-07-11 11:20:35.507948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.297 [2024-07-11 11:20:35.507968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.297 [2024-07-11 11:20:35.508197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.297 [2024-07-11 11:20:35.508436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.297 [2024-07-11 11:20:35.508456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.297 [2024-07-11 11:20:35.508469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.297 [2024-07-11 11:20:35.511963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.297 [2024-07-11 11:20:35.520778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.297 [2024-07-11 11:20:35.521260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.297 [2024-07-11 11:20:35.521289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.297 [2024-07-11 11:20:35.521304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.297 [2024-07-11 11:20:35.521543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.297 [2024-07-11 11:20:35.521749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.297 [2024-07-11 11:20:35.521778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.297 [2024-07-11 11:20:35.521791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.297 [2024-07-11 11:20:35.524810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.533941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.534306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.534332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.534347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.534580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.534798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.534818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.534830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.537639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.546979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.547309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.547335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.547349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.547549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.547800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.547825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.547853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.550809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.560043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.560375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.560402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.560416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.560631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.560885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.560905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.560918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.563826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.573163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.573526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.573553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.573568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.573798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.574002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.574022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.574034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.576941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.586143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.586472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.586498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.586513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.586727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.586955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.586975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.586988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.589888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.599167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.599527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.599554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.599569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.599813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.600011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.600030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.600057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.602845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.612139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.612501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.612527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.612543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.612786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.612997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.613016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.613028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.615837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.625166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.625480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.625520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.625535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.625750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.625977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.625997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.626010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.628809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.638263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.638628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.638655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.638674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.638943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.639173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.639192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.639204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.642130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.651413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.651773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.651800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.651815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.652035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.652241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.652260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.652272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.655218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.298 [2024-07-11 11:20:35.664459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.298 [2024-07-11 11:20:35.664825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.298 [2024-07-11 11:20:35.664852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.298 [2024-07-11 11:20:35.664867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.298 [2024-07-11 11:20:35.665087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.298 [2024-07-11 11:20:35.665293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.298 [2024-07-11 11:20:35.665312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.298 [2024-07-11 11:20:35.665324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.298 [2024-07-11 11:20:35.668256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.299 [2024-07-11 11:20:35.677493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.299 [2024-07-11 11:20:35.677856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.299 [2024-07-11 11:20:35.677882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.299 [2024-07-11 11:20:35.677897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.299 [2024-07-11 11:20:35.678110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.299 [2024-07-11 11:20:35.678317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.299 [2024-07-11 11:20:35.678335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.299 [2024-07-11 11:20:35.678353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.299 [2024-07-11 11:20:35.681341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.299 [2024-07-11 11:20:35.690688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.299 [2024-07-11 11:20:35.691099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.299 [2024-07-11 11:20:35.691141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.299 [2024-07-11 11:20:35.691156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.299 [2024-07-11 11:20:35.691389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.299 [2024-07-11 11:20:35.691580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.299 [2024-07-11 11:20:35.691599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.299 [2024-07-11 11:20:35.691610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.299 [2024-07-11 11:20:35.694504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.299 [2024-07-11 11:20:35.704072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.299 [2024-07-11 11:20:35.704372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.299 [2024-07-11 11:20:35.704397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.299 [2024-07-11 11:20:35.704426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.299 [2024-07-11 11:20:35.704630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.299 [2024-07-11 11:20:35.704909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.299 [2024-07-11 11:20:35.704930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.299 [2024-07-11 11:20:35.704943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.560 [2024-07-11 11:20:35.707991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.560 [2024-07-11 11:20:35.717219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.560 [2024-07-11 11:20:35.717641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.560 [2024-07-11 11:20:35.717692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.560 [2024-07-11 11:20:35.717708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.560 [2024-07-11 11:20:35.717985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.560 [2024-07-11 11:20:35.718195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.560 [2024-07-11 11:20:35.718214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.560 [2024-07-11 11:20:35.718226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.560 [2024-07-11 11:20:35.721209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.730483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.730908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.730937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.730952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.731203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.731395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.731413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.731425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.734412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.743810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.744210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.744236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.744251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.744451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.744675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.744694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.744706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.747605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.756984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.757414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.757466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.757481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.757707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.757967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.757990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.758003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.761521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.770207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.770543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.770612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.770627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.770876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.771093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.771112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.771124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.774123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.783454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.783819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.783847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.783863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.784105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.784300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.784318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.784330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.787237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.796969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.797340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.797368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.797383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.797611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.797865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.797887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.797900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.801206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.810640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.811002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.811030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.811046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.811283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.811490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.811509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.811526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.814514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.824013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.824382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.824408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.824422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.824635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.824880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.824902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.824916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.827941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.837236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.837597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.837622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.837637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.837870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.838117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.838136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.838148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.841109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.850448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.850822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.850848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.561 [2024-07-11 11:20:35.850863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.561 [2024-07-11 11:20:35.851097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.561 [2024-07-11 11:20:35.851305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.561 [2024-07-11 11:20:35.851323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.561 [2024-07-11 11:20:35.851335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.561 [2024-07-11 11:20:35.854289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.561 [2024-07-11 11:20:35.863723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.561 [2024-07-11 11:20:35.864078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.561 [2024-07-11 11:20:35.864109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.864125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.864346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.864553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.864571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.864583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.867503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.876864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.877192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.877232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.877247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.877446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.877670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.877689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.877700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.880577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.889913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.890286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.890313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.890328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.890547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.890763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.890797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.890810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.893631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.902927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.903286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.903312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.903326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.903540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.903751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.903792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.903805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.906591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.915927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.916286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.916313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.916328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.916562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.916763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.916798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.916811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.919680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.929079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.929439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.929466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.929480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.929714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.929939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.929962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.929974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.932861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.942113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.942426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.942466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.942480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.942695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.942922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.942943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.942955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.945804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.955249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.955611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.955637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.955651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.955896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.956129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.956148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.956160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.959041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.968310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.968671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.968698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.968713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.968963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.969192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.969211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.969222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.562 [2024-07-11 11:20:35.972105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.562 [2024-07-11 11:20:35.981631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.562 [2024-07-11 11:20:35.982092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.562 [2024-07-11 11:20:35.982119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.562 [2024-07-11 11:20:35.982135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.562 [2024-07-11 11:20:35.982356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.562 [2024-07-11 11:20:35.982582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.562 [2024-07-11 11:20:35.982601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.562 [2024-07-11 11:20:35.982614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.824 [2024-07-11 11:20:35.985681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.824 [2024-07-11 11:20:35.994890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.824 [2024-07-11 11:20:35.995253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.824 [2024-07-11 11:20:35.995279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.824 [2024-07-11 11:20:35.995299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.824 [2024-07-11 11:20:35.995515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.824 [2024-07-11 11:20:35.995723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.824 [2024-07-11 11:20:35.995766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.824 [2024-07-11 11:20:35.995780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:35.998605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.007955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.008312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.008339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.008354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.008574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.008826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.008848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.008862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.012358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.021155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.021520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.021547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.021562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.021808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.022036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.022056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.022085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.025084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.034420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.034789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.034833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.034849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.035089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.035297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.035320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.035333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.038262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.047607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.047999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.048027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.048043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.048291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.048484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.048502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.048514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.051476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.060877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.061259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.061285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.061300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.061533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.061725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.061768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.061783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.064680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.074003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.074352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.074423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.074438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.074653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.074907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.074927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.074941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.077842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.087141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.087468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.087495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.087509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.087723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.087950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.087970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.087983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.090952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.100426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.100793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.100825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.100841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.101082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.101273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.101292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.101304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.104201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.113527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.113885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.113957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.113973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.114240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.114432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.114451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.114463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.117346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.126712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.825 [2024-07-11 11:20:36.127105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.825 [2024-07-11 11:20:36.127132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.825 [2024-07-11 11:20:36.127147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.825 [2024-07-11 11:20:36.127387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.825 [2024-07-11 11:20:36.127579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.825 [2024-07-11 11:20:36.127597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.825 [2024-07-11 11:20:36.127610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.825 [2024-07-11 11:20:36.130413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.825 [2024-07-11 11:20:36.139746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.826 [2024-07-11 11:20:36.140115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.826 [2024-07-11 11:20:36.140141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.826 [2024-07-11 11:20:36.140156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.826 [2024-07-11 11:20:36.140370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.826 [2024-07-11 11:20:36.140576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.826 [2024-07-11 11:20:36.140595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.826 [2024-07-11 11:20:36.140606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.826 [2024-07-11 11:20:36.143454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.826 [2024-07-11 11:20:36.152965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.826 [2024-07-11 11:20:36.153361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.826 [2024-07-11 11:20:36.153388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.826 [2024-07-11 11:20:36.153403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.826 [2024-07-11 11:20:36.153638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.826 [2024-07-11 11:20:36.153894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.826 [2024-07-11 11:20:36.153915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.826 [2024-07-11 11:20:36.153928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.826 [2024-07-11 11:20:36.156831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.826 [2024-07-11 11:20:36.166099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.826 [2024-07-11 11:20:36.166459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.826 [2024-07-11 11:20:36.166486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.826 [2024-07-11 11:20:36.166501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.826 [2024-07-11 11:20:36.166721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.826 [2024-07-11 11:20:36.166967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.826 [2024-07-11 11:20:36.166988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.826 [2024-07-11 11:20:36.167006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.826 [2024-07-11 11:20:36.169912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.826 [2024-07-11 11:20:36.179187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.826 [2024-07-11 11:20:36.179548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.826 [2024-07-11 11:20:36.179574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.826 [2024-07-11 11:20:36.179589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.826 [2024-07-11 11:20:36.179820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.826 [2024-07-11 11:20:36.180025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.826 [2024-07-11 11:20:36.180045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.826 [2024-07-11 11:20:36.180057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.826 [2024-07-11 11:20:36.183011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.826 [2024-07-11 11:20:36.192280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.826 [2024-07-11 11:20:36.192647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.826 [2024-07-11 11:20:36.192674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:21.826 [2024-07-11 11:20:36.192689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:21.826 [2024-07-11 11:20:36.192927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:21.826 [2024-07-11 11:20:36.193168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:21.826 [2024-07-11 11:20:36.193186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:21.826 [2024-07-11 11:20:36.193198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:21.826 [2024-07-11 11:20:36.196085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:21.826 [2024-07-11 11:20:36.205324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.826 [2024-07-11 11:20:36.205686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.826 [2024-07-11 11:20:36.205713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.826 [2024-07-11 11:20:36.205728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.826 [2024-07-11 11:20:36.205992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.826 [2024-07-11 11:20:36.206203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.826 [2024-07-11 11:20:36.206221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.826 [2024-07-11 11:20:36.206233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.826 [2024-07-11 11:20:36.209116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.826 [2024-07-11 11:20:36.218317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.826 [2024-07-11 11:20:36.218733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.826 [2024-07-11 11:20:36.218790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.826 [2024-07-11 11:20:36.218806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.826 [2024-07-11 11:20:36.219051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.826 [2024-07-11 11:20:36.219243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.826 [2024-07-11 11:20:36.219262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.826 [2024-07-11 11:20:36.219274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.826 [2024-07-11 11:20:36.222121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:21.826 [2024-07-11 11:20:36.231412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.826 [2024-07-11 11:20:36.231826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.826 [2024-07-11 11:20:36.231853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.826 [2024-07-11 11:20:36.231867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.826 [2024-07-11 11:20:36.232080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.826 [2024-07-11 11:20:36.232286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.826 [2024-07-11 11:20:36.232305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.826 [2024-07-11 11:20:36.232316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:21.826 [2024-07-11 11:20:36.235219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:21.826 [2024-07-11 11:20:36.244725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:21.826 [2024-07-11 11:20:36.245086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.826 [2024-07-11 11:20:36.245113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:21.826 [2024-07-11 11:20:36.245128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:21.826 [2024-07-11 11:20:36.245349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:21.826 [2024-07-11 11:20:36.245562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:21.826 [2024-07-11 11:20:36.245581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:21.826 [2024-07-11 11:20:36.245594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.087 [2024-07-11 11:20:36.248630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.087 [2024-07-11 11:20:36.257913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.087 [2024-07-11 11:20:36.258297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.087 [2024-07-11 11:20:36.258324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.087 [2024-07-11 11:20:36.258340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.087 [2024-07-11 11:20:36.258587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.087 [2024-07-11 11:20:36.258851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.087 [2024-07-11 11:20:36.258873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.087 [2024-07-11 11:20:36.258887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.087 [2024-07-11 11:20:36.262349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.087 [2024-07-11 11:20:36.271138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.087 [2024-07-11 11:20:36.271503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.271530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.271545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.271791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.272003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.272023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.272051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.275045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.088 [2024-07-11 11:20:36.284382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.284698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.284724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.284764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.285008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.285252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.285271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.285283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.288091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.088 [2024-07-11 11:20:36.297420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.297782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.297809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.297823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.298076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.298273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.298307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.298323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.301220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.088 [2024-07-11 11:20:36.310537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.310918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.310945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.310960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.311180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.311386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.311405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.311416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.314225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.088 [2024-07-11 11:20:36.323704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.324075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.324101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.324116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.324336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.324542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.324560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.324572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.327380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.088 [2024-07-11 11:20:36.336667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.337058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.337101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.337116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.337349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.337541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.337559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.337571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.340479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.088 [2024-07-11 11:20:36.349730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.350155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.350190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.350207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.350455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.350662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.350681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.350693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.353608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.088 [2024-07-11 11:20:36.362792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.363154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.363180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.363194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.363408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.363616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.363635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.363647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.366532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.088 [2024-07-11 11:20:36.375809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.376222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.376274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.376289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.376514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.376706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.376724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.376737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.379657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.088 [2024-07-11 11:20:36.389094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.389459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.389486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.389500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.389721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.389961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.389980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.088 [2024-07-11 11:20:36.389992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.088 [2024-07-11 11:20:36.392978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.088 [2024-07-11 11:20:36.402118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.088 [2024-07-11 11:20:36.402451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.088 [2024-07-11 11:20:36.402521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.088 [2024-07-11 11:20:36.402536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.088 [2024-07-11 11:20:36.402750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.088 [2024-07-11 11:20:36.402965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.088 [2024-07-11 11:20:36.402984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.402996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.405910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.089 [2024-07-11 11:20:36.415334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.415663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.415689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.415704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.415952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.416183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.416201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.416214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.419088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.089 [2024-07-11 11:20:36.428596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.428928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.428955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.428970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.429187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.429395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.429413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.429425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.432398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.089 [2024-07-11 11:20:36.441727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.442135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.442162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.442177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.442396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.442602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.442621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.442632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.445518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.089 [2024-07-11 11:20:36.455012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.455410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.455436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.455451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.455671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.455911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.455932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.455945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.458831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.089 [2024-07-11 11:20:36.468159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.468519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.468546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.468561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.468808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.469028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.469048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.469075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.471961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.089 [2024-07-11 11:20:36.481251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.481615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.481642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.481662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.481902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.482121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.482140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.482153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.485041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.089 [2024-07-11 11:20:36.494337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.494700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.494726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.494742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.495004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.495230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.495249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.495260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.089 [2024-07-11 11:20:36.498144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.089 [2024-07-11 11:20:36.507567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.089 [2024-07-11 11:20:36.507934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.089 [2024-07-11 11:20:36.507963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.089 [2024-07-11 11:20:36.507978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.089 [2024-07-11 11:20:36.508214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.089 [2024-07-11 11:20:36.508425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.089 [2024-07-11 11:20:36.508445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.089 [2024-07-11 11:20:36.508456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.512005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.520923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.521330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.521357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.521372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.521607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.521827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.521852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.521866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.524921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.533991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.534369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.534395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.534409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.534609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.534860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.534880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.534893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.537673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.547125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.547453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.547480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.547495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.547709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.547943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.547963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.547975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.550937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.560152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.560470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.560495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.560510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.560710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.560951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.560971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.560984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.563880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.573338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.573706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.573733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.573748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.574001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.574228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.574246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.574258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.577143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.586378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.586738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.586771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.586787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.587021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.587228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.587247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.587259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.590066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.599445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.599813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.599841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.599857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.600091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.600283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.600301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.600313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.603221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.612458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.612847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.612862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.613101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.613293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.613311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.613323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.616268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.625587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.626008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.626036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.626052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.626302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.626494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.626512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.626523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.629487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.638927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.639295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.639321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.639336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.639570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.639788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.639824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.639837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.642676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.652071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.652456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.652482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.652497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.652731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.652957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.652977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.652995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.655914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.665083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.665443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.665470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.665485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.665705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.665943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.665964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.665976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.668856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.678106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.678465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.678491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.678505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.678718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.678949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.678971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.678984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.681921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.691251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.691562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.691603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.691617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.691857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.692086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.692104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.692116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.694907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.348 [2024-07-11 11:20:36.704393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.704761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.704788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.704803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.705037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.348 [2024-07-11 11:20:36.705229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.348 [2024-07-11 11:20:36.705247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.348 [2024-07-11 11:20:36.705259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.348 [2024-07-11 11:20:36.708079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:22.348 [2024-07-11 11:20:36.717363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:22.348 [2024-07-11 11:20:36.717727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.348 [2024-07-11 11:20:36.717782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:22.348 [2024-07-11 11:20:36.717799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:22.348 [2024-07-11 11:20:36.718013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:22.349 [2024-07-11 11:20:36.718220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:22.349 [2024-07-11 11:20:36.718239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:22.349 [2024-07-11 11:20:36.718250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:22.349 [2024-07-11 11:20:36.721093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:22.349 [2024-07-11 11:20:36.730583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.349 [2024-07-11 11:20:36.730935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.349 [2024-07-11 11:20:36.730962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.349 [2024-07-11 11:20:36.730977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.349 [2024-07-11 11:20:36.731209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.349 [2024-07-11 11:20:36.731416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.349 [2024-07-11 11:20:36.731435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.349 [2024-07-11 11:20:36.731447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.349 [2024-07-11 11:20:36.734343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.349 [2024-07-11 11:20:36.743757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.349 [2024-07-11 11:20:36.744119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.349 [2024-07-11 11:20:36.744146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.349 [2024-07-11 11:20:36.744161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.349 [2024-07-11 11:20:36.744396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.349 [2024-07-11 11:20:36.744592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.349 [2024-07-11 11:20:36.744611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.349 [2024-07-11 11:20:36.744623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.349 [2024-07-11 11:20:36.747535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.349 [2024-07-11 11:20:36.756818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.349 [2024-07-11 11:20:36.757195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.349 [2024-07-11 11:20:36.757221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.349 [2024-07-11 11:20:36.757235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.349 [2024-07-11 11:20:36.757435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.349 [2024-07-11 11:20:36.757658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.349 [2024-07-11 11:20:36.757677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.349 [2024-07-11 11:20:36.757689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.349 [2024-07-11 11:20:36.760600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.349 [2024-07-11 11:20:36.770276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.349 [2024-07-11 11:20:36.770659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.349 [2024-07-11 11:20:36.770687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.349 [2024-07-11 11:20:36.770717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.349 [2024-07-11 11:20:36.770968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.349 [2024-07-11 11:20:36.771200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.349 [2024-07-11 11:20:36.771219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.349 [2024-07-11 11:20:36.771245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.774302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.783457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.783837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.783864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.783879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.784086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.784310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.784328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.784340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.787238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.796458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.796822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.796849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.796864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.797085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.797292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.797310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.797322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.800253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.809503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.809821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.809848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.809863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.810062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.810285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.810304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.810316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.813122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.822643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.823039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.823081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.823096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.823330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.823522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.823540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.823552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.826459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.835662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.836049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.836095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.836111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.836338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.836530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.836548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.836560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.839456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.848730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.849097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.849123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.849138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.849372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.849564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.849583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.849595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.852563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.861992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.862390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.862417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.862432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.862667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.862886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.862906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.862919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.865841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.875300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.875666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.875693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.875708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.609 [2024-07-11 11:20:36.875969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.609 [2024-07-11 11:20:36.876184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.609 [2024-07-11 11:20:36.876204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.609 [2024-07-11 11:20:36.876216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.609 [2024-07-11 11:20:36.879126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.609 [2024-07-11 11:20:36.888441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.609 [2024-07-11 11:20:36.888801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.609 [2024-07-11 11:20:36.888844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.609 [2024-07-11 11:20:36.888860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.889101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.889308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.889327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.889339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.892301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.901717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.902077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.902146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.902162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.902376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.902583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.902601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.902613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.905530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.915089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.915535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.915587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.915602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.915845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.916062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.916097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.916110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.919224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.928283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.928646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.928674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.928689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.928952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.929163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.929181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.929193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.932082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.941787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.942209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.942267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.942283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.942535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.942747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.942776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.942805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.946113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.955411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.955818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.955846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.955862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.956090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.956311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.956331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.956343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.959397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.968824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.969182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.969223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.969242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.969489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.969681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.969699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.969711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.972779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.982069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.982397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.982438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.982453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.982674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.982894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.982914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.982927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.985910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:36.995211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:36.995574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:36.995616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:36.995631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:36.995891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:36.996103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:36.996122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:36.996134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:36.999006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:37.008376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:37.008687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:37.008781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:37.008799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:37.009031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:37.009239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:37.009264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:37.009277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:37.012135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.610 [2024-07-11 11:20:37.021643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.610 [2024-07-11 11:20:37.022103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.610 [2024-07-11 11:20:37.022145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.610 [2024-07-11 11:20:37.022161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.610 [2024-07-11 11:20:37.022412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.610 [2024-07-11 11:20:37.022623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.610 [2024-07-11 11:20:37.022642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.610 [2024-07-11 11:20:37.022654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.610 [2024-07-11 11:20:37.025605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.035146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.035623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.035677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.035692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.035955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.036167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.036200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.036213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.039230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.048239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.048634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.048661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.048676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.048947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.049176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.049195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.049207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.052127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.061200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.061570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.061612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.061628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.061891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.062123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.062142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.062153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.065020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.074199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.074687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.074728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.074744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.074991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.075216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.075235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.075247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.078127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.087201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.087674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.087725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.087740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.088014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.088244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.088263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.088275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.091157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.100303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.100666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.100693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.100708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.100977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.101202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.101221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.101233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.104151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.113344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.113769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.113810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.113826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.114065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.114272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.114291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.114303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.117213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.126483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.126859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.870 [2024-07-11 11:20:37.126886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.870 [2024-07-11 11:20:37.126901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.870 [2024-07-11 11:20:37.127115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.870 [2024-07-11 11:20:37.127323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.870 [2024-07-11 11:20:37.127341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.870 [2024-07-11 11:20:37.127353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.870 [2024-07-11 11:20:37.130260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.870 [2024-07-11 11:20:37.139505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.870 [2024-07-11 11:20:37.139906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.139933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.139948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.140170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.140378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.140396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.140413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.143343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.152628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.152998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.153041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.153056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.153308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.153514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.153532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.153544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.156458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.165815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.166221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.166262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.166278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.166517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.166723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.166741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.166761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.169605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.178985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.179393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.179435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.179451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.179690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.179924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.179945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.179957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.182912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.192224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.192585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.192632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.192648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.192923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.193140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.193160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.193172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.196110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.205422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.205834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.205863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.205879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.206107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.206314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.206332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.206344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.209262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.218667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.219122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.219164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.219179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.219434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.219640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.219659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.219670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.222536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.231943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.232292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.232357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.232372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.232618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.232860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.232880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.232893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.235873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.245173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.245552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.245593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.245608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.245864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.246057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.246076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.246088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.248974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.258384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.258715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.258742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.258765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.258989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.259212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.259231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.259244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.262174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.271622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.272014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.871 [2024-07-11 11:20:37.272042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.871 [2024-07-11 11:20:37.272058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.871 [2024-07-11 11:20:37.272295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.871 [2024-07-11 11:20:37.272502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.871 [2024-07-11 11:20:37.272521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.871 [2024-07-11 11:20:37.272534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.871 [2024-07-11 11:20:37.275486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:22.871 [2024-07-11 11:20:37.284794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:22.871 [2024-07-11 11:20:37.285239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.872 [2024-07-11 11:20:37.285281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:22.872 [2024-07-11 11:20:37.285298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:22.872 [2024-07-11 11:20:37.285537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:22.872 [2024-07-11 11:20:37.285744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:22.872 [2024-07-11 11:20:37.285770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:22.872 [2024-07-11 11:20:37.285783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:22.872 [2024-07-11 11:20:37.288581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.132 [2024-07-11 11:20:37.298042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.132 [2024-07-11 11:20:37.298500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.132 [2024-07-11 11:20:37.298541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420
00:34:23.132 [2024-07-11 11:20:37.298557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set
00:34:23.132 [2024-07-11 11:20:37.298832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor
00:34:23.132 [2024-07-11 11:20:37.299026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:23.132 [2024-07-11 11:20:37.299044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:23.132 [2024-07-11 11:20:37.299056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.132 [2024-07-11 11:20:37.302003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:23.132 [2024-07-11 11:20:37.311124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.311449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.311476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.311491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.311711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.311926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.311946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.311958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.314882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.132 [2024-07-11 11:20:37.324422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.324720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.324766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.324787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.325002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.325210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.325229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.325240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.328047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.132 [2024-07-11 11:20:37.337495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.337918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.337945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.337960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.338194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.338401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.338420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.338432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.341212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.132 [2024-07-11 11:20:37.350733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.351115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.351143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.351158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.351379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.351587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.351605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.351617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.354504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.132 [2024-07-11 11:20:37.363791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.364218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.364259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.364276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.364512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.364720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.364742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.364763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.367647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.132 [2024-07-11 11:20:37.377235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.377598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.377640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.377655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.377904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.378131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.378149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.378161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.381198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.132 [2024-07-11 11:20:37.390516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.132 [2024-07-11 11:20:37.390859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.132 [2024-07-11 11:20:37.390887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.132 [2024-07-11 11:20:37.390903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.132 [2024-07-11 11:20:37.391130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.132 [2024-07-11 11:20:37.391337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.132 [2024-07-11 11:20:37.391355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.132 [2024-07-11 11:20:37.391367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.132 [2024-07-11 11:20:37.394431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.403879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.404313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.404355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.404371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.404608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.404859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.404880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.404893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.407885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.133 [2024-07-11 11:20:37.416980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.417375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.417402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.417417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.417639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.417890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.417910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.417922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.420802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.430019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.430387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.430429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.430444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.430696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.430945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.430967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.430979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.433855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.133 [2024-07-11 11:20:37.443064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.443501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.443528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.443544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.443793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.443992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.444011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.444023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.446825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.456221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.456611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.456637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.456657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.456925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.457153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.457172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.457184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.460056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.133 [2024-07-11 11:20:37.469269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.469679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.469728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.469743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.470016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.470225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.470243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.470255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.473050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.482377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.482736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.482785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.482802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.483042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.483267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.483286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.483297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.486102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.133 [2024-07-11 11:20:37.495426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.495913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.495954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.495970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.496197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.496388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.496411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.496424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.499300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.508575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.508945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.508986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.509002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.509254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.509461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.509479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.509491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.512384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.133 [2024-07-11 11:20:37.521905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.522331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.522357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.522389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.522629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.522841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.522861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.522874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.133 [2024-07-11 11:20:37.525843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.133 [2024-07-11 11:20:37.535251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.133 [2024-07-11 11:20:37.535740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.133 [2024-07-11 11:20:37.535789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.133 [2024-07-11 11:20:37.535806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.133 [2024-07-11 11:20:37.536070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.133 [2024-07-11 11:20:37.536262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.133 [2024-07-11 11:20:37.536280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.133 [2024-07-11 11:20:37.536292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.134 [2024-07-11 11:20:37.539189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.134 [2024-07-11 11:20:37.548322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.134 [2024-07-11 11:20:37.548716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.134 [2024-07-11 11:20:37.548743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.134 [2024-07-11 11:20:37.548782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.134 [2024-07-11 11:20:37.549025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.134 [2024-07-11 11:20:37.549235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.134 [2024-07-11 11:20:37.549253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.134 [2024-07-11 11:20:37.549265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.134 [2024-07-11 11:20:37.552324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.394 [2024-07-11 11:20:37.561556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.394 [2024-07-11 11:20:37.561992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.394 [2024-07-11 11:20:37.562034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.394 [2024-07-11 11:20:37.562050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.394 [2024-07-11 11:20:37.562289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.394 [2024-07-11 11:20:37.562496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.394 [2024-07-11 11:20:37.562514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.562526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.565415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.395 [2024-07-11 11:20:37.574615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.574946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.574972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.574986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.575186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.575411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.575429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.575441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.578253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.395 [2024-07-11 11:20:37.587618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.588044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.588070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.588101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.588345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.588552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.588571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.588582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.591388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.395 [2024-07-11 11:20:37.600725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.601095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.601138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.601154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.601404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.601611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.601629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.601641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.604446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.395 [2024-07-11 11:20:37.613856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.614165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.614192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.614206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.614406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.614613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.614631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.614643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.617536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.395 [2024-07-11 11:20:37.626945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.627296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.627376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.627391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.627624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.627859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.627879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.627896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.630761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.395 [2024-07-11 11:20:37.639979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.640364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.640390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.640405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.640623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.640860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.640880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.640893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.643789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.395 [2024-07-11 11:20:37.653107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.653469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.653510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.653526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.653801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.654006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.654026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.654038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.656913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.395 [2024-07-11 11:20:37.666203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.666692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.666718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.666749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.666998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.667223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.667241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.667253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.670137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.395 [2024-07-11 11:20:37.679172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.679500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.679533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.679549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.679780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.679993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.680013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.680025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.682962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.395 [2024-07-11 11:20:37.692291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.395 [2024-07-11 11:20:37.692654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.395 [2024-07-11 11:20:37.692695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.395 [2024-07-11 11:20:37.692711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.395 [2024-07-11 11:20:37.692973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.395 [2024-07-11 11:20:37.693183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.395 [2024-07-11 11:20:37.693201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.395 [2024-07-11 11:20:37.693213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.395 [2024-07-11 11:20:37.696172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.396 [2024-07-11 11:20:37.705372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.705744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.705805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.705838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.706090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.706281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.706299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.706311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.709117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.396 [2024-07-11 11:20:37.718444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.718808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.718851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.718866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.719118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.719329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.719348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.719359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.722250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.396 [2024-07-11 11:20:37.731528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.731897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.731940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.731956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.732208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.732413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.732431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.732443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.735362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.396 [2024-07-11 11:20:37.744651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.745019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.745062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.745077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.745328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.745534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.745553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.745565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.748458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.396 [2024-07-11 11:20:37.757651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.758022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.758064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.758080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.758331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.758537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.758555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.758567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.761485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.396 [2024-07-11 11:20:37.770660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.771027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.771054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.771069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.771317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.771553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.771574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.771587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.775111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.396 [2024-07-11 11:20:37.783839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.784254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.784280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.784311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.784546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.784760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.784780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.784792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.787650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.396 [2024-07-11 11:20:37.797088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.797514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.797554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.797570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.797820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.798018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.798037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.798049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.800930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.396 [2024-07-11 11:20:37.810068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.396 [2024-07-11 11:20:37.810431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.396 [2024-07-11 11:20:37.810472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.396 [2024-07-11 11:20:37.810492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.396 [2024-07-11 11:20:37.810765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.396 [2024-07-11 11:20:37.810979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.396 [2024-07-11 11:20:37.810998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.396 [2024-07-11 11:20:37.811010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.396 [2024-07-11 11:20:37.813991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.656 [2024-07-11 11:20:37.823236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.656 [2024-07-11 11:20:37.823726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.656 [2024-07-11 11:20:37.823776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.656 [2024-07-11 11:20:37.823796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.656 [2024-07-11 11:20:37.824017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.656 [2024-07-11 11:20:37.824242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.656 [2024-07-11 11:20:37.824261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.656 [2024-07-11 11:20:37.824274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.656 [2024-07-11 11:20:37.827180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.656 [2024-07-11 11:20:37.836239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.656 [2024-07-11 11:20:37.836569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.656 [2024-07-11 11:20:37.836596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.656 [2024-07-11 11:20:37.836611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.656 [2024-07-11 11:20:37.836865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.656 [2024-07-11 11:20:37.837094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.656 [2024-07-11 11:20:37.837113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.656 [2024-07-11 11:20:37.837125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.656 [2024-07-11 11:20:37.839896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.656 [2024-07-11 11:20:37.849244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.656 [2024-07-11 11:20:37.849606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.656 [2024-07-11 11:20:37.849632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.656 [2024-07-11 11:20:37.849647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.656 [2024-07-11 11:20:37.849889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.656 [2024-07-11 11:20:37.850117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.656 [2024-07-11 11:20:37.850140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.656 [2024-07-11 11:20:37.850153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.656 [2024-07-11 11:20:37.852958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.656 [2024-07-11 11:20:37.862326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.656 [2024-07-11 11:20:37.862688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.656 [2024-07-11 11:20:37.862730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.656 [2024-07-11 11:20:37.862745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.656 [2024-07-11 11:20:37.862997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.863240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.863259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.863271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.866041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.657 [2024-07-11 11:20:37.875331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.875821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.875862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.875879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.876128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.876335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.876354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.876366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.879283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.657 [2024-07-11 11:20:37.888519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.888951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.888978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.889008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.889247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.889454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.889472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.889483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.892385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.657 [2024-07-11 11:20:37.901562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.901930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.901993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.902008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.902242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.902450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.902469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.902481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.905334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.657 [2024-07-11 11:20:37.914663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.915004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.915031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.915046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.915267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.915475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.915493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.915506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.918418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.657 [2024-07-11 11:20:37.927813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.928302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.928342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.928357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.928608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.928841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.928861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.928874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.931656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.657 [2024-07-11 11:20:37.940959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.941273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.941299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.941315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.941540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.941748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.941790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.941803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.944702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.657 [2024-07-11 11:20:37.954127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.954491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.954532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.954548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.954810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.955007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.955026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.955039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.957920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.657 [2024-07-11 11:20:37.967251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.967735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.967792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.967808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.968063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.968272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.968291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.968303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.971110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.657 [2024-07-11 11:20:37.980484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.980909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.980937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.980953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 [2024-07-11 11:20:37.981193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.981400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.981420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.981436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 [2024-07-11 11:20:37.984401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.657 [2024-07-11 11:20:37.993759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.657 [2024-07-11 11:20:37.994074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.657 [2024-07-11 11:20:37.994101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.657 [2024-07-11 11:20:37.994116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 402103 Killed "${NVMF_APP[@]}" "$@" 00:34:23.657 [2024-07-11 11:20:37.994373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.657 [2024-07-11 11:20:37.994571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.657 [2024-07-11 11:20:37.994589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.657 [2024-07-11 11:20:37.994602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.657 11:20:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:23.657 11:20:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:23.657 11:20:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:23.657 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:23.657 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 [2024-07-11 11:20:37.997771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=403065 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 403065 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 403065 ']' 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
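The shell diagnostic 'line 35: 402103 Killed "${NVMF_APP[@]}" "$@"' is the pivot of this test: bdevperf.sh has deliberately killed the running target (pid 402103) mid-I/O, which is why every reconnect above is refused, and the bdevperf.sh@36 trace entries show tgt_init bringing a fresh target up. A condensed sketch of that restart path, reconstructed from the trace (the helper bodies are paraphrased, not copied from the scripts, and the SPDK checkout path is abbreviated to $rootdir):

  tgt_init() {                      # host/bdevperf.sh@15
      nvmfappstart -m 0xE
  }
  nvmfappstart() {                  # per the nvmf/common.sh@480-482 entries in the trace
      ip netns exec cvl_0_0_ns_spdk \
          "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
      nvmfpid=$!                    # 403065 in this run
      waitforlisten "$nvmfpid"      # block until /var/tmp/spdk.sock answers
  }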
00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:23.658 11:20:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 [2024-07-11 11:20:38.007252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.007619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.007661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.007677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.007915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.008156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.008175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.008192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 [2024-07-11 11:20:38.011320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.658 [2024-07-11 11:20:38.020557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.020938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.020966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.020981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.021194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.021450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.021471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.021483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 [2024-07-11 11:20:38.024823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
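waitforlisten, echoed above ('Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'), does little more than poll the new target's RPC socket. A simplified sketch of such a helper, assuming SPDK's rpc.py is on PATH; the real function in autotest_common.sh is more elaborate:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
          if [[ -S $rpc_addr ]] && rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                             # socket is up and answering
          fi
          sleep 0.1
      done
      return 1                                     # give up after ~10 s
  }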
00:34:23.658 [2024-07-11 11:20:38.033901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.034301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.034344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.034361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.034591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.034837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.034859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.034872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 [2024-07-11 11:20:38.037919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.658 [2024-07-11 11:20:38.041947] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:34:23.658 [2024-07-11 11:20:38.042004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.658 [2024-07-11 11:20:38.047282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.047626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.047654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.047670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.047925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.048124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.048143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.048156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 [2024-07-11 11:20:38.051132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
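The 'Starting SPDK v24.09-pre ...' banner shows how nvmf_tgt's -m 0xE mask is handed down to DPDK: EAL receives -c 0xE (cores 1-3), a pinned --base-virtaddr so secondary processes can map the same regions, and --file-prefix=spdk0 to keep this instance's hugepage files distinct. Decoding any such mask is a one-liner:

  # 0xE = 0b1110 -> cores 1, 2 and 3; core 0 is left free for the host-side
  # bdevperf process that is busy reconnecting in the surrounding log lines.
  mask=0xE
  printf 'cores:'
  for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
  echo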
00:34:23.658 [2024-07-11 11:20:38.060727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.061080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.061107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.061122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.061343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.061557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.061576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.061589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 [2024-07-11 11:20:38.064588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.658 [2024-07-11 11:20:38.074115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.658 [2024-07-11 11:20:38.074489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.658 [2024-07-11 11:20:38.074532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.658 [2024-07-11 11:20:38.074547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.658 [2024-07-11 11:20:38.074810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.658 [2024-07-11 11:20:38.075007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.658 [2024-07-11 11:20:38.075026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.658 [2024-07-11 11:20:38.075039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.658 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.658 [2024-07-11 11:20:38.078313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
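The 'EAL: No free 2048 kB hugepages reported on node 1' notice is benign in this run: it only means NUMA node 1 has no 2 MiB hugepages reserved, and initialization proceeds from node 0's pool. The per-node reservation can be inspected, or topped up, through sysfs (the count below is illustrative, not what the CI pool uses):

  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages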
00:34:23.918 [2024-07-11 11:20:38.087667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.918 [2024-07-11 11:20:38.088054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.918 [2024-07-11 11:20:38.088083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.088099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.088334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.088526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.088545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.088557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.091574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.101029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.101458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.101490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.101507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.101747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.101974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.101994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.102007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.104968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.919 [2024-07-11 11:20:38.107952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:23.919 [2024-07-11 11:20:38.114287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.114736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.114787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.114807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.115051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.115260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.115279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.115294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.118269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.127571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.128095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.128131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.128150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.128411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.128607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.128626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.128641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.131621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.919 [2024-07-11 11:20:38.140919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.141363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.141390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.141406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.141652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.141909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.141930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.141944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.144950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.154284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.154810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.154843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.154862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.155094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.155306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.155325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.155340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.158287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.919 [2024-07-11 11:20:38.167570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.168056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.168091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.168112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.168375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.168571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.168589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.168604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.171577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.180861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.181300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.181329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.181345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.181591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.181842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.181864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.181886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.184845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.193288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.919 [2024-07-11 11:20:38.193319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.919 [2024-07-11 11:20:38.193354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.919 [2024-07-11 11:20:38.193373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.919 [2024-07-11 11:20:38.193388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
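Because the target was relaunched with -e 0xFFFF, every tracepoint group is enabled, and app_setup_trace prints the capture instructions above. Following those notices (app name nvmf, shm instance 0 from '-i 0'; the copy destination is an assumption):

  spdk_trace -s nvmf -i 0          # attach to the live trace ring and dump events
  cp /dev/shm/nvmf_trace.0 /tmp/   # or snapshot the ring ...
  spdk_trace -f /tmp/nvmf_trace.0  # ... and parse it offline later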
00:34:23.919 [2024-07-11 11:20:38.193489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.919 [2024-07-11 11:20:38.193548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.919 [2024-07-11 11:20:38.193555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.919 [2024-07-11 11:20:38.194229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.194626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.194654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.919 [2024-07-11 11:20:38.194670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.919 [2024-07-11 11:20:38.194909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.919 [2024-07-11 11:20:38.195136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.919 [2024-07-11 11:20:38.195156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.919 [2024-07-11 11:20:38.195169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.919 [2024-07-11 11:20:38.198335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.919 [2024-07-11 11:20:38.207666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.919 [2024-07-11 11:20:38.208243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.919 [2024-07-11 11:20:38.208284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.208305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.208557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.208793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.208816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.208833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.211968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
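The three 'Reactor started' notices confirm the 0xE core mask took effect: one reactor thread each on cores 1, 2 and 3. Once the RPC socket is up, the same assignment can be read back (rpc.py ships in SPDK's scripts/ directory):

  rpc.py framework_get_reactors    # lists each reactor's core and its lightweight threads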
00:34:23.920 [2024-07-11 11:20:38.221296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.221768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.221808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.221830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.222064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.222289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.222310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.222326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.225499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.920 [2024-07-11 11:20:38.234833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.235412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.235451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.235472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.235724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.235963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.235986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.236003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.239198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.920 [2024-07-11 11:20:38.248330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.248733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.248777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.248797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.249048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.249256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.249277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.249293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.252483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.920 [2024-07-11 11:20:38.261809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.262388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.262426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.262447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.262698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.262962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.262988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.263006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.266171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.920 [2024-07-11 11:20:38.275347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.275770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.275802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.275820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.276040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.276259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.276280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.276297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.279500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.920 [2024-07-11 11:20:38.288809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.289203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.289232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.289248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.289462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.289688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.289708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.289721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.292971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.920 [2024-07-11 11:20:38.302324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.302662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.302689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.302705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.302982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.303212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.303233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.303246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.920 [2024-07-11 11:20:38.306458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.920 [2024-07-11 11:20:38.315924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.316278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.316306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.316323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.316553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.316803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.316825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.316839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 [2024-07-11 11:20:38.320025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.920 [2024-07-11 11:20:38.327498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.920 [2024-07-11 11:20:38.329459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:23.920 [2024-07-11 11:20:38.329825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.920 [2024-07-11 11:20:38.329853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:23.920 [2024-07-11 11:20:38.329869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:23.920 [2024-07-11 11:20:38.330098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:23.920 [2024-07-11 11:20:38.330318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:23.920 [2024-07-11 11:20:38.330338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:23.920 [2024-07-11 11:20:38.330351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.920 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.920 [2024-07-11 11:20:38.333596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.181 [2024-07-11 11:20:38.342953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.181 [2024-07-11 11:20:38.343313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.181 [2024-07-11 11:20:38.343356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:24.181 [2024-07-11 11:20:38.343372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:24.181 [2024-07-11 11:20:38.343630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:24.181 [2024-07-11 11:20:38.343879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.181 [2024-07-11 11:20:38.343900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.181 [2024-07-11 11:20:38.343914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:24.181 [2024-07-11 11:20:38.347161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.181 [2024-07-11 11:20:38.356405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.181 [2024-07-11 11:20:38.356804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.181 [2024-07-11 11:20:38.356834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:24.181 [2024-07-11 11:20:38.356851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:24.181 [2024-07-11 11:20:38.357105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:24.181 [2024-07-11 11:20:38.357311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.181 [2024-07-11 11:20:38.357331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.181 [2024-07-11 11:20:38.357345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.181 [2024-07-11 11:20:38.360518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.181 [2024-07-11 11:20:38.369914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.181 [2024-07-11 11:20:38.370397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.181 [2024-07-11 11:20:38.370434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:24.181 [2024-07-11 11:20:38.370454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:24.181 [2024-07-11 11:20:38.370693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:24.181 [2024-07-11 11:20:38.370946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.181 [2024-07-11 11:20:38.370969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.181 [2024-07-11 11:20:38.370985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.181 Malloc0 00:34:24.181 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.181 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:24.182 [2024-07-11 11:20:38.374283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
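Interleaved with the reconnect noise, the test is now reprovisioning the target over RPC: bdevperf.sh@17 recreates the TCP transport ('*** TCP Transport Init ***' above) and @18 a RAM-backed bdev. Stripped of the rpc_cmd wrapper, the equivalent direct calls are (flags copied from the trace; rpc.py's default socket is assumed):

  rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u sets the 8 KiB IO unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks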
00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:24.182 [2024-07-11 11:20:38.383422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.182 [2024-07-11 11:20:38.383787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.182 [2024-07-11 11:20:38.383816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c69f70 with addr=10.0.0.2, port=4420 00:34:24.182 [2024-07-11 11:20:38.383839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c69f70 is same with the state(5) to be set 00:34:24.182 [2024-07-11 11:20:38.384053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c69f70 (9): Bad file descriptor 00:34:24.182 [2024-07-11 11:20:38.384274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:24.182 [2024-07-11 11:20:38.384295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:24.182 [2024-07-11 11:20:38.384308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:24.182 [2024-07-11 11:20:38.387493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:24.182 [2024-07-11 11:20:38.392017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.182 11:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 402389 00:34:24.182 [2024-07-11 11:20:38.397047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:24.182 [2024-07-11 11:20:38.470788] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
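bdevperf.sh@19-@21 then rebuild the subsystem exactly as it existed before the kill, and the instant the listener returns ('NVMe/TCP Target Listening on 10.0.0.2 port 4420' above) the pending reset finally logs 'Resetting controller successful', letting the waiting bdevperf process (pid 402389, the wait at @38) resume I/O. The rpc.py equivalents of those three steps:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420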
00:34:34.161
00:34:34.161                                                Latency(us)
00:34:34.161 Device Information          : runtime(s)     IOPS    MiB/s    Fail/s    TO/s   Average      min       max
00:34:34.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:34.161 Verification LBA range: start 0x0 length 0x4000
00:34:34.161 Nvme1n1                     :     15.01   6619.58    25.86  10327.19    0.00   7529.45   825.27  16505.36
00:34:34.161 ===================================================================================================================
00:34:34.161 Total                       :             6619.58    25.86  10327.19    0.00   7529.45   825.27  16505.36
00:34:34.161 11:20:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
11:20:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:34.162 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 403065 ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 403065 ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 403065'
killing process with pid 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 403065
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:34.162
11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:34.162 11:20:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.542 11:20:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:35.801 00:34:35.801 real 0m22.333s 00:34:35.801 user 0m58.773s 00:34:35.801 sys 0m4.663s 00:34:35.801 11:20:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:35.801 11:20:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:35.801 ************************************ 00:34:35.801 END TEST nvmf_bdevperf 00:34:35.801 ************************************ 00:34:35.801 11:20:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:35.801 11:20:49 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:35.801 11:20:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:35.801 11:20:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.801 11:20:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.801 ************************************ 00:34:35.801 START TEST nvmf_target_disconnect 00:34:35.801 ************************************ 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:35.801 * Looking for test storage... 
00:34:35.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.801 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:35.802 11:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:38.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:38.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.336 11:20:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:38.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:38.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:38.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:38.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:34:38.336 00:34:38.336 --- 10.0.0.2 ping statistics --- 00:34:38.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.336 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:38.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:38.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:34:38.336 00:34:38.336 --- 10.0.0.1 ping statistics --- 00:34:38.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.336 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:38.336 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:38.336 ************************************ 00:34:38.336 START TEST nvmf_target_disconnect_tc1 00:34:38.336 ************************************ 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:38.337 
11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.337 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.337 [2024-07-11 11:20:52.433298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:38.337 [2024-07-11 11:20:52.433371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2319590 with addr=10.0.0.2, port=4420 00:34:38.337 [2024-07-11 11:20:52.433409] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:38.337 [2024-07-11 11:20:52.433441] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:38.337 [2024-07-11 11:20:52.433455] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:38.337 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:38.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:38.337 Initializing NVMe Controllers 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:38.337 00:34:38.337 real 0m0.094s 00:34:38.337 user 0m0.038s 00:34:38.337 sys 
0m0.056s 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:38.337 ************************************ 00:34:38.337 END TEST nvmf_target_disconnect_tc1 00:34:38.337 ************************************ 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:38.337 ************************************ 00:34:38.337 START TEST nvmf_target_disconnect_tc2 00:34:38.337 ************************************ 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=406199 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 406199 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 406199 ']' 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
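disconnect_init here boils down to two steps: launch nvmf_tgt inside the cvl_0_0_ns_spdk namespace that nvmftestinit set up earlier, then block until the app answers on its RPC socket. A condensed sketch of what nvmfappstart and waitforlisten do (the polling loop stands in for the harness's waitforlisten; paths shortened to the repo root):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!                               # 406199 in this run
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                              # poll the UNIX-domain RPC socket until the target is up
done

The -m 0xF0 mask pins the target's reactors to cores 4-7, which is why the reactor startup messages below land on cores 4, 5, 6 and 7 while the reconnect host runs with -c 0xF on cores 0-3.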
00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:38.337 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.337 [2024-07-11 11:20:52.546500] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:34:38.337 [2024-07-11 11:20:52.546586] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.337 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.337 [2024-07-11 11:20:52.610839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:38.337 [2024-07-11 11:20:52.697154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.337 [2024-07-11 11:20:52.697218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.337 [2024-07-11 11:20:52.697247] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.337 [2024-07-11 11:20:52.697259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.337 [2024-07-11 11:20:52.697268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.337 [2024-07-11 11:20:52.697350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:38.337 [2024-07-11 11:20:52.697412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:38.337 [2024-07-11 11:20:52.697478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:38.337 [2024-07-11 11:20:52.697480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:38.597 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 Malloc0 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:38.598 11:20:52 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 [2024-07-11 11:20:52.871215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 [2024-07-11 11:20:52.899454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=406297 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:38.598 11:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:38.598 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:40.497 11:20:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 406199 00:34:40.497 11:20:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 [2024-07-11 11:20:54.924100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 
starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 [2024-07-11 11:20:54.924474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Write completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.780 starting I/O failed 
00:34:40.780 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 [2024-07-11 11:20:54.924807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.781 [2024-07-11 11:20:54.924956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.924988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.925110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.925138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.925259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.925286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.925428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.925455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-11 11:20:54.925568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.925595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.925704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.925730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Read completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 Write completed with error (sct=0, sc=8) 00:34:40.781 starting I/O failed 00:34:40.781 [2024-07-11 11:20:54.926031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: 
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:40.781 [2024-07-11 11:20:54.926235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.926387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.926535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.926674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.926802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.926954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.926982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.927118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.927145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.927271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.927298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.927495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.927537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 00:34:40.781 [2024-07-11 11:20:54.927660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.781 [2024-07-11 11:20:54.927688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.781 qpair failed and we were unable to recover it. 
00:34:40.781 [2024-07-11 11:20:54.927826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.927854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.781 qpair failed and we were unable to recover it.
00:34:40.781 [2024-07-11 11:20:54.927936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.927964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.781 qpair failed and we were unable to recover it.
00:34:40.781 [2024-07-11 11:20:54.928082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.928123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.781 qpair failed and we were unable to recover it.
00:34:40.781 [2024-07-11 11:20:54.928229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.781 qpair failed and we were unable to recover it.
00:34:40.781 [2024-07-11 11:20:54.928370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.928397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.781 qpair failed and we were unable to recover it.
00:34:40.781 [2024-07-11 11:20:54.928513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.781 [2024-07-11 11:20:54.928540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.928635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.928662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.928768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.928798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.928918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.928945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.929855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.929984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.930907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.930992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.931920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.931946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.932879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.932975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.933908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.933934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.934062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.934089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.934207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.934233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.934343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.934369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.782 [2024-07-11 11:20:54.934484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.782 [2024-07-11 11:20:54.934511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.782 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.934620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.934648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.934792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.934832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.934955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.934983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.935880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.935908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.936902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.936929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.937933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.937959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.938894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.938922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.939955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.939982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.940085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.940120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.940258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.940284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.940392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.940422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.783 qpair failed and we were unable to recover it.
00:34:40.783 [2024-07-11 11:20:54.940543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.783 [2024-07-11 11:20:54.940573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.940686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.940713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.940829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.940856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.940941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.940967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.941951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.941977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.942958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.942997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.943893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.943982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.944881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.944982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.945022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.945126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.945153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.945268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.784 [2024-07-11 11:20:54.945294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.784 qpair failed and we were unable to recover it.
00:34:40.784 [2024-07-11 11:20:54.945379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.945405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.945507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.945546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.945661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.945689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.945799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.945829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.945938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.945965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.946960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.946987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.947933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.947960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.948883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.948994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.949903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.949930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.950900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.950926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.951044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.951070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.951156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.951183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.785 [2024-07-11 11:20:54.951266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.785 [2024-07-11 11:20:54.951293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.785 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.951434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.951461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.951600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.951627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.951768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.951809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.951926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.951954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.952081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.952260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.952471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.952607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.786 [2024-07-11 11:20:54.952791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.786 qpair failed and we were unable to recover it.
00:34:40.786 [2024-07-11 11:20:54.952902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.952928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.953917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.953945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 
00:34:40.786 [2024-07-11 11:20:54.954365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.954854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.954980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.955134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.955249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.955388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.955542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.955706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 
00:34:40.786 [2024-07-11 11:20:54.955862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.955891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.956938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.956966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.957105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.957173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.957289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.957316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 
00:34:40.786 [2024-07-11 11:20:54.957434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.957460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.786 qpair failed and we were unable to recover it. 00:34:40.786 [2024-07-11 11:20:54.957578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.786 [2024-07-11 11:20:54.957606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.957690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.957717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.957869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.957899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 
00:34:40.787 [2024-07-11 11:20:54.958855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.958963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.958989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.959927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.959966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 
00:34:40.787 [2024-07-11 11:20:54.960236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.960967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.960994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 
00:34:40.787 [2024-07-11 11:20:54.961590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.961901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.961932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 
00:34:40.787 [2024-07-11 11:20:54.962849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.787 qpair failed and we were unable to recover it. 00:34:40.787 [2024-07-11 11:20:54.962957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.787 [2024-07-11 11:20:54.962984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.963940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.963967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 
00:34:40.788 [2024-07-11 11:20:54.964158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.964888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.964915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.965112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.965280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.965415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 
00:34:40.788 [2024-07-11 11:20:54.965587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.965739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.965878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.965906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.788 [2024-07-11 11:20:54.966860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.788 [2024-07-11 11:20:54.966889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.788 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 
00:34:40.789 [2024-07-11 11:20:54.967169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.967971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.967997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.968120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.968283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.968475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.968628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 
00:34:40.789 [2024-07-11 11:20:54.968762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.968894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.968934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.969951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.969978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.789 qpair failed and we were unable to recover it. 00:34:40.789 [2024-07-11 11:20:54.970092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.789 [2024-07-11 11:20:54.970118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-11 11:20:54.970230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.970257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.970403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.970429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.970541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.970568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.970720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.970750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.970872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.970898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.970988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.971129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.971242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.971408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.971516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-11 11:20:54.971661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.971865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.971905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.972872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.972899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 
00:34:40.790 [2024-07-11 11:20:54.973147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.790 [2024-07-11 11:20:54.973946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.790 [2024-07-11 11:20:54.973973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.790 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-11 11:20:54.974117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-11 11:20:54.974143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-11 11:20:54.974258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-11 11:20:54.974284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 00:34:40.791 [2024-07-11 11:20:54.974363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.791 [2024-07-11 11:20:54.974391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [2024-07-11 11:20:54.974560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:40.791 [2024-07-11 11:20:54.974614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 
00:34:40.791 qpair failed and we were unable to recover it. 
00:34:40.791 [... the three-line sequence above (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 2024-07-11 11:20:54.974 through 11:20:55.005, always against addr=10.0.0.2, port=4420, cycling through tqpair handles 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90 ...] 
00:34:40.799 [2024-07-11 11:20:55.005771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.005813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.005909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.005938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.006080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.006297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.006570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.006713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.006868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.006988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.007121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.007260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-11 11:20:55.007442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.007608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.007748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.007879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.007906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.008890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.008917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 
00:34:40.799 [2024-07-11 11:20:55.009026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.009052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.009161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.009188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.009306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.009335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.009472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.009499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.009586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.799 [2024-07-11 11:20:55.009617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.799 qpair failed and we were unable to recover it. 00:34:40.799 [2024-07-11 11:20:55.009762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.009789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.009903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.009929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-11 11:20:55.010467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.010897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.010923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.011743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-11 11:20:55.011948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.011988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.012215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.012266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.012488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.012543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.012653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.012680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.012796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.012824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.012938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.012965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 
00:34:40.800 [2024-07-11 11:20:55.013632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.800 [2024-07-11 11:20:55.013798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.800 qpair failed and we were unable to recover it. 00:34:40.800 [2024-07-11 11:20:55.013907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.013947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.014850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.014877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-11 11:20:55.014991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.015950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.015978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.016101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.016299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 
00:34:40.801 [2024-07-11 11:20:55.016484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.016644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.016776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.016914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.016941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.017078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.017194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.017340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.017510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.801 [2024-07-11 11:20:55.017661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.801 qpair failed and we were unable to recover it. 00:34:40.801 [2024-07-11 11:20:55.017779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.017806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-11 11:20:55.017891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.017918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.017994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.018909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.018991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.019168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-11 11:20:55.019404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.019542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.019679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.019801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.019955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.019982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.020081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.020108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.020222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.020249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.020393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.020420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.020562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.020591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 00:34:40.802 [2024-07-11 11:20:55.020702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.802 [2024-07-11 11:20:55.020728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.802 qpair failed and we were unable to recover it. 
00:34:40.802 [2024-07-11 11:20:55.020852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.020879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.020961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.020988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.021900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.021982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-11 11:20:55.022255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.022872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.022987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.023098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.023337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.023545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.023726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 
00:34:40.803 [2024-07-11 11:20:55.023884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.023911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.024932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.803 [2024-07-11 11:20:55.024960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.803 qpair failed and we were unable to recover it. 00:34:40.803 [2024-07-11 11:20:55.025073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-11 11:20:55.025099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 00:34:40.804 [2024-07-11 11:20:55.025188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.804 [2024-07-11 11:20:55.025214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.804 qpair failed and we were unable to recover it. 
00:34:40.804 [2024-07-11 11:20:55.025302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.804 [2024-07-11 11:20:55.025328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.804 qpair failed and we were unable to recover it.
00:34:40.804 [2024-07-11 11:20:55.028669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.804 [2024-07-11 11:20:55.028709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:40.804 qpair failed and we were unable to recover it.
00:34:40.805 [2024-07-11 11:20:55.029695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.805 [2024-07-11 11:20:55.029735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.805 qpair failed and we were unable to recover it.
00:34:40.805 [2024-07-11 11:20:55.029955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.805 [2024-07-11 11:20:55.029995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.805 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable-qpair sequence repeats for tqpairs 0x219c600, 0x7f74b8000b90, 0x7f74c0000b90 and 0x7f74b0000b90, all against addr=10.0.0.2, port=4420, through 11:20:55.057 ...]
00:34:40.811 [2024-07-11 11:20:55.057615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.811 [2024-07-11 11:20:55.057641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.811 qpair failed and we were unable to recover it.
00:34:40.811 [2024-07-11 11:20:55.057793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.811 [2024-07-11 11:20:55.057822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.057938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.057965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.058882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.058908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 
00:34:40.812 [2024-07-11 11:20:55.059164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.059903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.059932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 
00:34:40.812 [2024-07-11 11:20:55.060582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.060962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.060988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.061074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.061101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.061216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.061243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.061351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.061378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.061486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.812 [2024-07-11 11:20:55.061512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.812 qpair failed and we were unable to recover it. 00:34:40.812 [2024-07-11 11:20:55.061629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.061656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.061740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.061778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-11 11:20:55.061903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.061931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.062900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.062926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-11 11:20:55.063211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.063915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.063942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.064071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.064214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.064382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.064520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 
00:34:40.813 [2024-07-11 11:20:55.064630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.813 [2024-07-11 11:20:55.064750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.813 [2024-07-11 11:20:55.064783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.813 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.064870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.064897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.065887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.065914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-11 11:20:55.066170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.066879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.066977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 
00:34:40.814 [2024-07-11 11:20:55.067516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.067919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.067946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.068031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.068065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.068147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.068175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.068260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.814 [2024-07-11 11:20:55.068287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.814 qpair failed and we were unable to recover it. 00:34:40.814 [2024-07-11 11:20:55.068432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.068461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.068552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.068582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.068724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.068760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 
00:34:40.815 [2024-07-11 11:20:55.068842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.068869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.068983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.069794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.069822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.070000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.070063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.070222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.070276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 
00:34:40.815 [2024-07-11 11:20:55.070465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.070493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.070639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.070666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.070780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.070809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.070964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.071015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.071183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.071242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.071416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.815 [2024-07-11 11:20:55.071442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.815 qpair failed and we were unable to recover it. 00:34:40.815 [2024-07-11 11:20:55.071519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.071546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.071663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.071689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.071840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.071869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.072039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.072112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 
00:34:40.816 [2024-07-11 11:20:55.072345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.072399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.072625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.072652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.072795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.072822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.072907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.072934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.073700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 
00:34:40.816 [2024-07-11 11:20:55.073891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.073932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.074878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.074999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.075030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.075173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.075200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.075312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.075339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 
00:34:40.816 [2024-07-11 11:20:55.075480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.816 [2024-07-11 11:20:55.075507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.816 qpair failed and we were unable to recover it. 00:34:40.816 [2024-07-11 11:20:55.075622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.075649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.075739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.075776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.075870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.075898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.075989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.076206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.076324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.076439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.076543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 00:34:40.817 [2024-07-11 11:20:55.076678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.817 [2024-07-11 11:20:55.076704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:40.817 qpair failed and we were unable to recover it. 
00:34:40.817 [2024-07-11 11:20:55.076785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.817 [2024-07-11 11:20:55.076811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.817 qpair failed and we were unable to recover it.
[The same three-line failure repeats continuously from 11:20:55.076 through 11:20:55.100, cycling through tqpair=0x7f74c0000b90, 0x7f74b8000b90, and 0x7f74b0000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it."]
00:34:40.823 [2024-07-11 11:20:55.101200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.823 [2024-07-11 11:20:55.101297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.823 qpair failed and we were unable to recover it.
[From 11:20:55.100 onward the identical failure also appears for tqpair=0x219c600, interleaved with further attempts on the 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90 qpairs; every record still reports errno = 111 against addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:34:40.825 [2024-07-11 11:20:55.112104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.825 [2024-07-11 11:20:55.112131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:40.825 qpair failed and we were unable to recover it.
00:34:40.825 [2024-07-11 11:20:55.112248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.112379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.112535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.112697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.112835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.112951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.112977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.113101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.113128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.113234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.113264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.113420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.113450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.113560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.113586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 
00:34:40.825 [2024-07-11 11:20:55.113704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.825 [2024-07-11 11:20:55.113733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.825 qpair failed and we were unable to recover it. 00:34:40.825 [2024-07-11 11:20:55.113859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.113898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.114898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.114924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 
00:34:40.826 [2024-07-11 11:20:55.115160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.115940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.115967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 
00:34:40.826 [2024-07-11 11:20:55.116578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.116880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.116998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.117873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.117913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 
00:34:40.826 [2024-07-11 11:20:55.118031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.118068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.118185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.826 [2024-07-11 11:20:55.118213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.826 qpair failed and we were unable to recover it. 00:34:40.826 [2024-07-11 11:20:55.118354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.118381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.118520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.118546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.118635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.118662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.118748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.118784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.118881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.118907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.118995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 
00:34:40.827 [2024-07-11 11:20:55.119356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.119938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.119965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 
00:34:40.827 [2024-07-11 11:20:55.120774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.120944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.120970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.121815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.827 qpair failed and we were unable to recover it. 00:34:40.827 [2024-07-11 11:20:55.121975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.827 [2024-07-11 11:20:55.122005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.122179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 
00:34:40.828 [2024-07-11 11:20:55.122335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.122456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.122579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.122718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.122892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.122937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 
00:34:40.828 [2024-07-11 11:20:55.123770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.123881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.123908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.124966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.124992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 
00:34:40.828 [2024-07-11 11:20:55.125135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.828 [2024-07-11 11:20:55.125163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.828 qpair failed and we were unable to recover it. 00:34:40.828 [2024-07-11 11:20:55.125273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.125421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.125530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.125646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.125780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.125919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.125945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 
00:34:40.829 [2024-07-11 11:20:55.126470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.126904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.126931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 
00:34:40.829 [2024-07-11 11:20:55.127835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.127863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.127978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.128884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.829 [2024-07-11 11:20:55.128983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.129009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 
00:34:40.829 [2024-07-11 11:20:55.129158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.829 [2024-07-11 11:20:55.129184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.829 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.129337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.129489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.129614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.129738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.129891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.129977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.130003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.130095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.130121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.130217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.130244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 00:34:40.830 [2024-07-11 11:20:55.130351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.830 [2024-07-11 11:20:55.130378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.830 qpair failed and we were unable to recover it. 
00:34:40.830 [2024-07-11 11:20:55.130467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.830 [2024-07-11 11:20:55.130495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:40.830 qpair failed and we were unable to recover it.
00:34:40.830 [... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats from 11:20:55.130604 through 11:20:55.148075, mostly for tqpair=0x7f74b0000b90 and (between 11:20:55.137784 and 11:20:55.140719) for tqpair=0x219c600, all with addr=10.0.0.2, port=4420 ...]
00:34:40.834 [2024-07-11 11:20:55.148187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21aa5b0 is same with the state(5) to be set
00:34:40.834 [2024-07-11 11:20:55.148335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:40.834 [2024-07-11 11:20:55.148375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:40.834 qpair failed and we were unable to recover it.
00:34:40.836 [... the same triplet repeats from 11:20:55.148467 through 11:20:55.162537, cycling across tqpair=0x7f74c0000b90, 0x7f74b0000b90, and 0x219c600, all with addr=10.0.0.2, port=4420 ...]
00:34:40.836 [2024-07-11 11:20:55.162678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.836 [2024-07-11 11:20:55.162704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.836 qpair failed and we were unable to recover it. 00:34:40.836 [2024-07-11 11:20:55.162830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.836 [2024-07-11 11:20:55.162880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.836 qpair failed and we were unable to recover it. 00:34:40.836 [2024-07-11 11:20:55.162990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.836 [2024-07-11 11:20:55.163025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.836 qpair failed and we were unable to recover it. 00:34:40.836 [2024-07-11 11:20:55.163200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.163235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.163379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.163428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.163567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.163594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.163704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.163731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.163852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.163891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.163988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.164134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 
00:34:40.837 [2024-07-11 11:20:55.164277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.164427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.164584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.164769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.164924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.164972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.165144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.165359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.165507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.165651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.165769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 
00:34:40.837 [2024-07-11 11:20:55.165899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.165926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.166877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.166904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.167022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.167194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 
00:34:40.837 [2024-07-11 11:20:55.167407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.167593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.167739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.167874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.167901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 00:34:40.837 [2024-07-11 11:20:55.168880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.837 [2024-07-11 11:20:55.168907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.837 qpair failed and we were unable to recover it. 
00:34:40.838 [2024-07-11 11:20:55.168994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.169875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.169964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.170117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.170304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 
00:34:40.838 [2024-07-11 11:20:55.170481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.170647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.170831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.170873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.170995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.171196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.171360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.171573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.171743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.171900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.171933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.172073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 
00:34:40.838 [2024-07-11 11:20:55.172216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.172441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.172658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.172828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.172942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.172970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 
00:34:40.838 [2024-07-11 11:20:55.173825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.173956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.173983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.174097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.174124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.174277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.174323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.174476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.174510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.838 [2024-07-11 11:20:55.174613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.838 [2024-07-11 11:20:55.174647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.838 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.174799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.174827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.174911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.174938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 
00:34:40.839 [2024-07-11 11:20:55.175396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.175953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.175980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.176072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.176263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.176422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.176594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.176761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 
00:34:40.839 [2024-07-11 11:20:55.176894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.176922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.177883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.177910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 
00:34:40.839 [2024-07-11 11:20:55.178357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.178937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.178963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.839 qpair failed and we were unable to recover it. 00:34:40.839 [2024-07-11 11:20:55.179082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.839 [2024-07-11 11:20:55.179108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.179243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.179383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.179578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.179721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 
00:34:40.840 [2024-07-11 11:20:55.179843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.179953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.179979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.180072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.180097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.180233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:40.840 [2024-07-11 11:20:55.180265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:40.840 qpair failed and we were unable to recover it. 00:34:40.840 [2024-07-11 11:20:55.180394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.180436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.181503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.181555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.181697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.181727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.182526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.182575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.182709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.182737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.182858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.182885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-11 11:20:55.182973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.183906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.183999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.184027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.184121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.184149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 00:34:41.127 [2024-07-11 11:20:55.184284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.184316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [2024-07-11 11:20:55.184419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.127 [2024-07-11 11:20:55.184448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.127 qpair failed and we were unable to recover it. 
00:34:41.127 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 11:20:55.184534 through 11:20:55.217179, always with connect() failed, errno = 111 and addr=10.0.0.2, port=4420, across tqpair values 0x7f74b8000b90, 0x219c600, 0x7f74c0000b90, and 0x7f74b0000b90; every attempt ends with "qpair failed and we were unable to recover it." ...] 
00:34:41.132 [2024-07-11 11:20:55.217179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 
00:34:41.132 [2024-07-11 11:20:55.217297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.217448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.217565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.217721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.217845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.217873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.217991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.218160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.218376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.218520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.218669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 
00:34:41.132 [2024-07-11 11:20:55.218817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.218934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.218963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.219084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.219398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.219533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.219686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.219843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.219996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.220119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.220273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 
00:34:41.132 [2024-07-11 11:20:55.220418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.220540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.220687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.220848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.220880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.221087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.221357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.221529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.221685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.221833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.221969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.222003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 
00:34:41.132 [2024-07-11 11:20:55.222143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.222178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.222289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.132 [2024-07-11 11:20:55.222324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.132 qpair failed and we were unable to recover it. 00:34:41.132 [2024-07-11 11:20:55.222459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.222494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.222601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.222635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.222758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.222788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.222892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.222921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.223059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.223278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.223456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.223625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 
00:34:41.133 [2024-07-11 11:20:55.223774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.223904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.223932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.224911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.224939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 
00:34:41.133 [2024-07-11 11:20:55.225346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.225879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.225996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.226161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.226311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.226456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.226645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.226781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 
00:34:41.133 [2024-07-11 11:20:55.226907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.226935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.227853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.227970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.228111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.228235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 
00:34:41.133 [2024-07-11 11:20:55.228389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.228578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.228720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.228895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.228925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.229018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.229067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.229212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.229247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.229351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.229387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.133 [2024-07-11 11:20:55.229511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.133 [2024-07-11 11:20:55.229547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.133 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.229721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.229764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.229911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.229960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 
00:34:41.134 [2024-07-11 11:20:55.230104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.230256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.230429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.230572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.230759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.230912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.230942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.231115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.231292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.231450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.231598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 
00:34:41.134 [2024-07-11 11:20:55.231745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.231932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.231961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.232936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.232965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.233084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.233117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.233243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.233271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 
00:34:41.134 [2024-07-11 11:20:55.233478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.233518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.233681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.233731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.233918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.233947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.234093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.234121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.234218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.234248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.234408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.234450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.234601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.234653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.234788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.234819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.235011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.235211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 
00:34:41.134 [2024-07-11 11:20:55.235369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.235516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.235672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.235858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.235913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.236002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.236031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.236235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.236289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.134 qpair failed and we were unable to recover it. 00:34:41.134 [2024-07-11 11:20:55.236413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.134 [2024-07-11 11:20:55.236441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.236571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.236600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.236748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.236804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.236950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.236999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 
00:34:41.135 [2024-07-11 11:20:55.237143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.237908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.237997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.238123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.238272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.238395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 
00:34:41.135 [2024-07-11 11:20:55.238520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.238695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.238853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.238896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.239933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.239968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 
00:34:41.135 [2024-07-11 11:20:55.240086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.240255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.240407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.240557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.240767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.240931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.240961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.241065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.241094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.241253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.241289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.241403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 00:34:41.135 [2024-07-11 11:20:55.241553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.135 [2024-07-11 11:20:55.241589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.135 qpair failed and we were unable to recover it. 
00:34:41.135 [2024-07-11 11:20:55.241778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.241808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.241906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.241935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.242067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.242096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.242263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.242298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.242465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.242517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.242731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.242828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.242951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.242980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.243073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.243125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.243241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.243276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.243422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.243457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.135 qpair failed and we were unable to recover it.
00:34:41.135 [2024-07-11 11:20:55.243598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.135 [2024-07-11 11:20:55.243633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.243751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.243794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.243905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.243936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.244937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.244997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.245158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.245211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.245443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.245495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.245652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.245681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.245779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.245812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.245929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.245958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.246911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.246940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.247110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.247351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.247524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.247700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.247888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.247985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.248116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.248283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.248457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.248631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.248835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.248879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.249896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.249985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.250014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.250143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.250173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.250303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.250333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.136 qpair failed and we were unable to recover it.
00:34:41.136 [2024-07-11 11:20:55.250419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.136 [2024-07-11 11:20:55.250448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.250572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.250601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.250695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.250723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.250819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.250846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.250967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.250995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.251210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.251251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.251408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.251442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.251554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.251588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.251725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.251760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.251910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.251938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.252902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.252997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.253146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.253320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.253510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.253683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.253852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.253895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.254891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.254919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.255939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.255968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.256913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.256941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.257069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.257109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.257263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.257313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.257472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.137 [2024-07-11 11:20:55.257516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.137 qpair failed and we were unable to recover it.
00:34:41.137 [2024-07-11 11:20:55.257669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.257703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.257840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.257869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.258871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.258899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.259909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.259943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.260888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.260990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.261022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.261157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.261186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.261313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.261342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.261430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.261485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.261663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.261732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.261975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.262179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.262362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.262602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.262781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.262923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.262952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.263860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.263903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.264035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.264081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.264177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.264229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.138 qpair failed and we were unable to recover it.
00:34:41.138 [2024-07-11 11:20:55.264344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.138 [2024-07-11 11:20:55.264378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.264548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.264582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.264772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.264844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.265007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.265048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.265304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.265361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.265524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.265575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.265744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.265778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.265883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.265913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.266960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.266990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.267088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.267118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.267250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.267298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.267473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.267521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.267722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.267810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.267902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.267931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.268018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.268048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.268208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.268254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.268451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.268500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.268701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.268771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.268902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.268931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.269082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.269111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.269204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.269264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.269450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.269497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.269727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.269789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.269955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.269984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.270100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.270129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.270237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.270308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.270477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.270528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.270738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.270775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.270933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.270965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.271133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.271191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.271343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.271396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.271523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.271575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.271721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.271749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.271856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.271886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.272898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.272986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.273016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.273098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.273127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.273250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.273279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.273376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.273406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.273527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.139 [2024-07-11 11:20:55.273556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.139 qpair failed and we were unable to recover it.
00:34:41.139 [2024-07-11 11:20:55.273648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.273680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.273792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.273821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.273954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.274868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.274895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.275895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.275924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.276057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.276101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.276275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.276319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.276448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.276492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.276663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.276706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.276877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.276910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.277063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.277106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.277256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.277306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.277510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.277560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.277762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.277819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.277937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.277966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.278942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.278972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.279933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.279963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.140 qpair failed and we were unable to recover it.
00:34:41.140 [2024-07-11 11:20:55.280882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.140 [2024-07-11 11:20:55.280911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.281825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.281860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.282925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.282980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.283167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.283320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.283482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.283662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.283846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.283964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.284018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.284136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.284186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.284376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.284423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.284608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.284656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.284828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.284880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.285934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.285964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.286893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.286938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.287155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.287200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.287353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.287399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.287570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.287616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.287761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.287792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.287917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.287960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.288064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.288138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.288282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.288329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.288537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.288586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.141 [2024-07-11 11:20:55.288778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.141 [2024-07-11 11:20:55.288828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.141 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.288950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.288979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.289118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.289166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.289343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.289391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.289552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.289600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.289749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.289791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.289895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.289923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.290914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.290975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.291242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.291289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.291467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.291513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.291690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.291738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.291891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.291921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.292062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.292124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.292297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.292354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.292558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.292605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.292777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.292827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.292944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.292974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.293136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.293182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.293365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.293411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.293574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.293623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.293784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.293814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.293901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.293959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.294179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.294226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.294402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.294450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.294631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.294679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.294838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.294868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.294982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.142 qpair failed and we were unable to recover it.
00:34:41.142 [2024-07-11 11:20:55.295895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.142 [2024-07-11 11:20:55.295952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.296889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.296983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.297887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.297988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.298847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.298986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.299015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.299254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.299300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.299483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.299532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.299689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.299717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.299843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.299872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.299995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.300040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.300221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.300268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.300454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.300502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.300665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.143 [2024-07-11 11:20:55.300694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.143 qpair failed and we were unable to recover it.
00:34:41.143 [2024-07-11 11:20:55.300826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.300856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.300979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.301057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.301248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.301318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.301484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.301534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.301709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.301769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.301910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.301949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.302089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.302117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.302242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.302277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.302429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.302487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.302655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.302715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 
00:34:41.143 [2024-07-11 11:20:55.302859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.302888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.303047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.303093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.303223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.303270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.143 [2024-07-11 11:20:55.303445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.143 [2024-07-11 11:20:55.303491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.143 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.303660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.303689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.303798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.303827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.303953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.303994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.304126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.304172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.304354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.304402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.304591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.304639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 
00:34:41.144 [2024-07-11 11:20:55.304783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.304813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.304908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.304937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.305056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.305091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.305231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.305277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.305418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.305465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.305643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.305691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.305876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.305908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.306008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.306149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.306350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 
00:34:41.144 [2024-07-11 11:20:55.306588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.306766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.306941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.306994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.307873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.307971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 
00:34:41.144 [2024-07-11 11:20:55.308118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.308270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.308533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.308727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.308934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.308963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.309122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.309249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.309371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.309559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.309740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 
00:34:41.144 [2024-07-11 11:20:55.309916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.309968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.310173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.310533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.310688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.310851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.310982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.311012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.311160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.311189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.144 [2024-07-11 11:20:55.311277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.144 [2024-07-11 11:20:55.311330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.144 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.311545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.311575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-11 11:20:55.311695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.311725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.311826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.311855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.311944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.311973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.312159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.312208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.312428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.312476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.312669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.312716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.312861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.312891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.313000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.313116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.313284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-11 11:20:55.313516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.313743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.313925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.313954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.314921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.314973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-11 11:20:55.315227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.315924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.315953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.316043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.316072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.316244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.316290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.316445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.316492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.316631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.316685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-11 11:20:55.316846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.316905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.317878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.317908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 
00:34:41.145 [2024-07-11 11:20:55.318436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.318918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.145 [2024-07-11 11:20:55.318947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.145 qpair failed and we were unable to recover it. 00:34:41.145 [2024-07-11 11:20:55.319088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.319134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.319296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.319343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.319522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.319569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.319731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.319769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.319862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.319891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-11 11:20:55.320058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.320218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.320433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.320573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.320701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.320907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.320953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.321156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.321223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.321481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.321531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.321698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.321745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.321911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.321939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-11 11:20:55.322075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.322119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.322297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.322345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.322527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.322571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.322734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.322771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.322897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.322926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.323118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.323183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.323340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.323386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.323565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.323611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.323740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.323776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.323878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.323913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-11 11:20:55.324009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.324248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.324411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.324599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.324721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.324908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.324954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.325121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.325166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.325370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.325416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.325561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.325592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.325732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.325777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 
00:34:41.146 [2024-07-11 11:20:55.325898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.325928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.326104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.326149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.326289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.326333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.326503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.326560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.326681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.326710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.146 [2024-07-11 11:20:55.326809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.146 [2024-07-11 11:20:55.326839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.146 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-11 11:20:55.326972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-11 11:20:55.327023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-11 11:20:55.327184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-11 11:20:55.327237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-11 11:20:55.327372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-11 11:20:55.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 00:34:41.147 [2024-07-11 11:20:55.327547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.147 [2024-07-11 11:20:55.327576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.147 qpair failed and we were unable to recover it. 
00:34:41.147 [2024-07-11 11:20:55.327696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.147 [2024-07-11 11:20:55.327726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.147 qpair failed and we were unable to recover it.
00:34:41.152 [last three messages repeated continuously through 2024-07-11 11:20:55.362776: every connect() attempt to addr=10.0.0.2, port=4420 is refused with errno = 111, and tqpairs 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90 all fail without recovering]
00:34:41.152 [2024-07-11 11:20:55.362920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.362972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.363966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.363995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.364085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.364114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.364238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.364281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 
00:34:41.152 [2024-07-11 11:20:55.364461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.364503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.364666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.364708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.364873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.364903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.365852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.365981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 
00:34:41.152 [2024-07-11 11:20:55.366159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.366280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.366429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.366587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.366736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.366898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.366927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.367017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.367210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.367381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.367599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 
00:34:41.152 [2024-07-11 11:20:55.367725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.367882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.368030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.368059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.368184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.368227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.368390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.368432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.152 qpair failed and we were unable to recover it. 00:34:41.152 [2024-07-11 11:20:55.368644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.152 [2024-07-11 11:20:55.368672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.368769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.368803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.368915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.368944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.369033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.369206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 
00:34:41.153 [2024-07-11 11:20:55.369421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.369606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.369764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.369941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.369970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.370939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.370968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 
00:34:41.153 [2024-07-11 11:20:55.371092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.371150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.371286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.371315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.371528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.371570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.371734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.371768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.371891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.371921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.372047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.372076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.372225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.372254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.372383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.372430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.372621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.372663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.372827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.372857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 
00:34:41.153 [2024-07-11 11:20:55.372977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.373941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.373996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 
00:34:41.153 [2024-07-11 11:20:55.374549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.374941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.374970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.375092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.375121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.375302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.375344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.375522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.153 [2024-07-11 11:20:55.375571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.153 qpair failed and we were unable to recover it. 00:34:41.153 [2024-07-11 11:20:55.375763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.375824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.375951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.375982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 
00:34:41.154 [2024-07-11 11:20:55.376302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.376962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.376990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 
00:34:41.154 [2024-07-11 11:20:55.377671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.377856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.377981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.378917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.378944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.379067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.379097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 
00:34:41.154 [2024-07-11 11:20:55.379203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.379245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.379442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.379484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.379651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.379694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.379875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.379920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.380135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.380179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.380356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.380402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.380553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.380598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.380795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.380826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.380948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.381000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.381123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.381178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 
00:34:41.154 [2024-07-11 11:20:55.381324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.381372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.154 [2024-07-11 11:20:55.381522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.154 [2024-07-11 11:20:55.381551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.154 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.381658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.381687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.381809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.381838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.381935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.381964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.382057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.382246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.382379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.382531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.382690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 
00:34:41.155 [2024-07-11 11:20:55.382875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.382918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.383061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.383103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.383248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.383277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.383422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.383464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.383639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.383681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.383867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.383909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.384046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.384088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.384233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.384419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.384461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 00:34:41.155 [2024-07-11 11:20:55.384584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.155 [2024-07-11 11:20:55.384630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.155 qpair failed and we were unable to recover it. 
00:34:41.155 [2024-07-11 11:20:55.384758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.155 [2024-07-11 11:20:55.384788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.155 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeats continuously from 11:20:55.384904 through 11:20:55.423763, with only the timestamp and the tqpair pointer varying (alternating among tqpair=0x7f74b0000b90, tqpair=0x7f74b8000b90, and tqpair=0x219c600), all with addr=10.0.0.2, port=4420 ...]
00:34:41.160 [2024-07-11 11:20:55.423887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.160 [2024-07-11 11:20:55.423916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.160 qpair failed and we were unable to recover it.
00:34:41.160 [2024-07-11 11:20:55.424089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.424148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.424283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.424324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.424541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.424615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.424788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.424819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.424970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.425180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.425357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.425526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.425651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.425796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 
00:34:41.160 [2024-07-11 11:20:55.425950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.425979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.426090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.426119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.426338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.426379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.426509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.426550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.426673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.426717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.426896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.426938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.427106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.427148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.427311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.427359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.427498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.160 [2024-07-11 11:20:55.427551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.160 qpair failed and we were unable to recover it. 00:34:41.160 [2024-07-11 11:20:55.427725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.427814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 
00:34:41.161 [2024-07-11 11:20:55.427965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.428011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.428188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.428242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.428474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.428528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.428779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.428832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.428953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.428983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.429100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.429130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.429278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.429307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.429485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.429549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.429764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.429819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.429949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.429977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 
00:34:41.161 [2024-07-11 11:20:55.430102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.430159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.430327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.430384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.430511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.430562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.430684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.430715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.430889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.430947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.431061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.431244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.431396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.431559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.431701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 
00:34:41.161 [2024-07-11 11:20:55.431840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.431884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.432962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.432989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.433132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.433189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 
00:34:41.161 [2024-07-11 11:20:55.433402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.433460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.433631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.433660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.433766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.433795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.433884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.433911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.434029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.434058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.434262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.434327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.434587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.434644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.434896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.434927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.435021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.435075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.435267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.435347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 
00:34:41.161 [2024-07-11 11:20:55.435550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.161 [2024-07-11 11:20:55.435608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.161 qpair failed and we were unable to recover it. 00:34:41.161 [2024-07-11 11:20:55.435780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.435810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.435908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.436886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.436913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.437071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.437124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 
00:34:41.162 [2024-07-11 11:20:55.437288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.437347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.437455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.437486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.437590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.437619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.437759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.437816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.438103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.438178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.438428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.438485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.438653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.438683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.438816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.438844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.438938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.438966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.439077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 
00:34:41.162 [2024-07-11 11:20:55.439386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.439442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.439646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.439703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.439863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.439894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.440028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.440330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.440389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.440540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.440602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.440722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.440769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.440894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.440947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 
00:34:41.162 [2024-07-11 11:20:55.441323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.441886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.441916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.442036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.442070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.442246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.442329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.442507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.442564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.442774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.442835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.442955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.443013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 
00:34:41.162 [2024-07-11 11:20:55.443231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.443287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.443502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.162 [2024-07-11 11:20:55.443559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.162 qpair failed and we were unable to recover it. 00:34:41.162 [2024-07-11 11:20:55.443728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.443769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.443899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.443931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.444056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.444340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.444557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.444708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.444863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.444981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.445044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 
00:34:41.163 [2024-07-11 11:20:55.445259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.445317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.445581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.445659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.445834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.445891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.446169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.446245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.446466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.446495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.446586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.446616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.446714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.446744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.446845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.446912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.447150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.447210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.447362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.447419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 
00:34:41.163 [2024-07-11 11:20:55.447623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.447652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.447738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.447772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.447919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.447948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.448125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.448182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.448474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.448564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.448791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.448819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.448978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.449007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.449122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.449189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.449415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.449472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 00:34:41.163 [2024-07-11 11:20:55.449737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.163 [2024-07-11 11:20:55.449823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.163 qpair failed and we were unable to recover it. 
00:34:41.163 [2024-07-11 11:20:55.449971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.450001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.450126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.450155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.450374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.450430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.450590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.450647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.450842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.450872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.450953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.451023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.451207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.451264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.451513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.451570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.451765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.451794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.451913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.163 [2024-07-11 11:20:55.451942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.163 qpair failed and we were unable to recover it.
00:34:41.163 [2024-07-11 11:20:55.452065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.452095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.452264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.452310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.452514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.452571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.452799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.452829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.452951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.452981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.453079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.453109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.453200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.453264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.453519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.453573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.453696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.453866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.453896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.454954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.454983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.455207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.455265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.455481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.455538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.455771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.455835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.455951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.455980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.456072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.456100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.456197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.456227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.456395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.456440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.456658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.456715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.456917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.456951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.457175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.457250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.457506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.457563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.457815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.457845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.457965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.458024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.458317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.458401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.458642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.458699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.458879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.458908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.459032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.459061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.459174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.459203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.459404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.459460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.459683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.459739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.459906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.459936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.460153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.460210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.460468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.460543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.460782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.460835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.460983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.461062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.164 [2024-07-11 11:20:55.461314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.164 [2024-07-11 11:20:55.461390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.164 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.461602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.461659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.461833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.461863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.462008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.462037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.462140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.462169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.462339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.462395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.462576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.462634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.462866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.462896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.463016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.463045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.463194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.463250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.463507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.463564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.463728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.463764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.463888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.463916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.464039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.464068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.464216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.464273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.464496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.464552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.464774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.464821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.464970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.464999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.465946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.465980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.466918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.466948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.467033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.467091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.467382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.467457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.467650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.467681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.467810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.467839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.467934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.467962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.468104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.468134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.468317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.468369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.468630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.468687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.468884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.468914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.469062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.469211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.469381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.469645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.165 [2024-07-11 11:20:55.469792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.165 qpair failed and we were unable to recover it.
00:34:41.165 [2024-07-11 11:20:55.469888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.469918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.470016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.470045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.470149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.470221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.470502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.470559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.470817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.470848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.470977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.471007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.471226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.471285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.471502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.471559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.471776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.471822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.471911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.471941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.472062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.472090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.472211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.472448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.472503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.472695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.472724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.472857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.472887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.473036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.473065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.473213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.473243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.473365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.473394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.473566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.473623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.473853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.473887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.474012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.474041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.474224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.474280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.474483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.474540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.474731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.474815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.474936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.474966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.475073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.475130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.475337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.475393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.475611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.475667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.475855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.475884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.475983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.476161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.476299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.476447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.476653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.476897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.476926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.477078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.477107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.477279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.477343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.477539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.477592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.477690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.477719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.477842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.477872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.166 qpair failed and we were unable to recover it.
00:34:41.166 [2024-07-11 11:20:55.478070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.166 [2024-07-11 11:20:55.478130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.478312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.478372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.478609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.478662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.478765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.478795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.478986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.479945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.479974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.480822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.480852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.481965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.481994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.482121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.482151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.482272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.482302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.482436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.482480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.482637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.482668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.482762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.482792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.483026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.167 qpair failed and we were unable to recover it.
00:34:41.167 [2024-07-11 11:20:55.483308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.167 [2024-07-11 11:20:55.483383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.483613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.483643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.483777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.483809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.483960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.483989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.484078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.484129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.484449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.484522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.484797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.484826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.484957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.485032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.485336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.485411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.485666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.485723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.485910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.485939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.486096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.486175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.486454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.486529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.486782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.486833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.486953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.486982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.487079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.487109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.487279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.487337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.487626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.487683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.487883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.487912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.488009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.488038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.488174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.488203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.488418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.488477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.488695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.488724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.488853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.488882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.489030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.489060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.489185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.489214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.489392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.489421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.489630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.489687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.489914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.489944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.490029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.490063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.490163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.490193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.490320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.490349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.490567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.168 [2024-07-11 11:20:55.490624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.168 qpair failed and we were unable to recover it.
00:34:41.168 [2024-07-11 11:20:55.490826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.490856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.491002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.491031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.491235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.491312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.491567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.491625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.491866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.491896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.492051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.492129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.492387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.492444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.492667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.492723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.492896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.492925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.493073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.493101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.493232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.493261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.493408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.169 [2024-07-11 11:20:55.493467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:41.169 qpair failed and we were unable to recover it.
00:34:41.169 [2024-07-11 11:20:55.493728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.493810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.493959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.493988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.494137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.494166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.494289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.494317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.494412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.494441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.494631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.494687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.494925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.494983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.495274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.495349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.495562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.495619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.495828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.495907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 
00:34:41.169 [2024-07-11 11:20:55.496168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.496244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.496576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.496813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.496893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.497094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.497170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.497379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.497437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.497690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.497746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.498054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.498137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.498406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.498481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.498650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.498708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.498970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.499046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 
00:34:41.169 [2024-07-11 11:20:55.499296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.499371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.499592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.499650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.499925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.500001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.500296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.500368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.500550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.500616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.500787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.500845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.501094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.501171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.501428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.501485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.169 qpair failed and we were unable to recover it. 00:34:41.169 [2024-07-11 11:20:55.501666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.169 [2024-07-11 11:20:55.501723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.501999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.502076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 
00:34:41.170 [2024-07-11 11:20:55.502369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.502444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.502659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.502714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.502981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.503057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.503347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.503421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.503655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.503712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.503969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.504046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.504243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.504319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.504535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.504593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.504893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.504970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.505262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.505336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 
00:34:41.170 [2024-07-11 11:20:55.505521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.505578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.505812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.505871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.506099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.506156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.506394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.506451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.506705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.506774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.507027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.507084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.507346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.507402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.507617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.507674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.507920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.507997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.508225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.508281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 
00:34:41.170 [2024-07-11 11:20:55.508488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.508545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.508816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.508875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.509134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.509469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.509543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.509749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.509830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.510130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.510205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.510513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.510570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.510859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.510936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.511177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.511251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.511418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.511477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 
00:34:41.170 [2024-07-11 11:20:55.511715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.511785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.511981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.512059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.512349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.512423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.512673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.512730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.513044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.513136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.513426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.513501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.513729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.513801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.514072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.514129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.170 qpair failed and we were unable to recover it. 00:34:41.170 [2024-07-11 11:20:55.514428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.170 [2024-07-11 11:20:55.514501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.514693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.514749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 
00:34:41.171 [2024-07-11 11:20:55.515063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.515121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.515334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.515391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.515603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.515660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.516010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.516094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.516395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.516471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.516652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.516711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.516958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.517017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.517262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.517336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.517596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.517955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.518038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 
00:34:41.171 [2024-07-11 11:20:55.518235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.518313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.518526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.518583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.518827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.518887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.519129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.519206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.519418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.519495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.519697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.519763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.520030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.520112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.520370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.520445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.520711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.520779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.521039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.521116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 
00:34:41.171 [2024-07-11 11:20:55.521404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.521479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.521720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.521797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.522006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.522091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.522300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.522377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.171 [2024-07-11 11:20:55.522605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.171 [2024-07-11 11:20:55.522663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.171 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-11 11:20:55.522965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-11 11:20:55.523054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-11 11:20:55.523313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-11 11:20:55.523387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-11 11:20:55.523572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.450 [2024-07-11 11:20:55.523633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.450 qpair failed and we were unable to recover it. 00:34:41.450 [2024-07-11 11:20:55.523940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.524015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.524300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.524375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-11 11:20:55.524567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.524626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.524844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.524922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.525129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.525203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.525504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.525579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.525795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.525861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.526100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.526175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.526404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.526478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.526732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.526801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.527099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.527174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.527365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.527443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-11 11:20:55.527668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.527725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.528041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.528115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.528366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.528440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.528655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.528711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.529060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.529160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.529448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.529520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.529828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.529894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.530189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.530255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.530517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.530582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.530845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.530912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-11 11:20:55.531211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.531276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.531492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.531557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.531812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.531869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.532123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.532179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.532498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.532570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.532871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.532929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.533187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.533242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.533558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.533622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.533928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 00:34:41.451 [2024-07-11 11:20:55.534236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.451 [2024-07-11 11:20:55.534291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.451 [2024-07-11 11:20:55.534607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:41.451 [2024-07-11 11:20:55.534687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 
00:34:41.451 qpair failed and we were unable to recover it. 
00:34:41.457 [the three-line sequence above repeats back-to-back, with only the microsecond timestamps advancing, from 11:20:55.534607 through 11:20:55.595385 (roughly 200 occurrences, console time 00:34:41.451-00:34:41.457): every connect() attempt for tqpair=0x219c600 to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), and each time the qpair cannot be recovered]
00:34:41.457 [2024-07-11 11:20:55.595640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.595684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.595884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.595928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.596102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.596146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.596371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.596416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.596704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.596785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.597012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.597064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.597326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.597372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.597589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.597654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.597928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.597985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.598226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.598272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 
00:34:41.457 [2024-07-11 11:20:55.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.598616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.598901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.598948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.599146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.599200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.599479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.599543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.599797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.599864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.600096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.600144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.600305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.600354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.600611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.600678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.600956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.601006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.601282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.601347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 
00:34:41.457 [2024-07-11 11:20:55.601654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.601718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.602031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.602083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.602342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.602408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.602660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.602723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.603006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.603070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.603326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.603391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.603586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.603652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.603953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.604006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.604290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.604342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.604548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.604611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 
00:34:41.457 [2024-07-11 11:20:55.604878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.604931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.605221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.605286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.605543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.605608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.605951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.606008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.606171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.606261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.606583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.606639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.606880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.606938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.607245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.607302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.607602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.457 [2024-07-11 11:20:55.607666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.457 qpair failed and we were unable to recover it. 00:34:41.457 [2024-07-11 11:20:55.607949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.608016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 
00:34:41.458 [2024-07-11 11:20:55.608318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.608382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.608636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.608701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.608969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.609036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.609310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.609375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.609585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.609649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.609992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.610053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.610362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.610427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.610737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.610810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.611086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.611156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.611443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.611508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 
00:34:41.458 [2024-07-11 11:20:55.611732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.611854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.612083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.612153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.612472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.612538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.612794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.612860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.613176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.613236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.613506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.613570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.613868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.613935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.614230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.614295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.614551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.614615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.614870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.614937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 
00:34:41.458 [2024-07-11 11:20:55.615229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.615294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.615561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.615625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.615913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.615980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.616237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.616303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.616589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.616653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.616971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.617042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.617295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.617360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.617584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.617649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.617933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.617999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.618210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.618274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 
00:34:41.458 [2024-07-11 11:20:55.618565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.618630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.618935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.619002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.619261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.619328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.619631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.619696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.620090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.620157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.620444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.620509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.620781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.620863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.621129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.621194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.621492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.621557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 00:34:41.458 [2024-07-11 11:20:55.621807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.621873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.458 qpair failed and we were unable to recover it. 
00:34:41.458 [2024-07-11 11:20:55.622174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.458 [2024-07-11 11:20:55.622247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.622540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.622605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.622896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.622962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.623254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.623318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.623608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.623673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.623993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.624068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.624356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.624421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.624668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.624733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.624966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.625041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.625331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.625396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-11 11:20:55.625655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.625722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.626051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.626116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.626362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.626427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.626714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.626802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.627080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.627144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.627386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.627448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.627697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.627790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.628034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.628101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.628385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.628450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.628700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.628784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-11 11:20:55.629020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.629085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.629345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.629409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.629693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.629773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.630037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.630111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.630401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.630465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.630769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.630835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.631083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.631148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.631446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.631511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.631823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.631890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.632150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.632214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 
00:34:41.459 [2024-07-11 11:20:55.632465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.632528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.632737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.632825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.633116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.633179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.633478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.633541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.633810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.633876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.634177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.634243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.459 qpair failed and we were unable to recover it. 00:34:41.459 [2024-07-11 11:20:55.634536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.459 [2024-07-11 11:20:55.634600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.634871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.634937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.635206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.635270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.635529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.635594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-11 11:20:55.635838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.635903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.636195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.636263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.636485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.636546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.636793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.636859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.637057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.637122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.637408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.637472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.637777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.637854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.638158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.638222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.638482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.638542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 00:34:41.460 [2024-07-11 11:20:55.638825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.638891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
00:34:41.460 [2024-07-11 11:20:55.639157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.460 [2024-07-11 11:20:55.639222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.460 qpair failed and we were unable to recover it. 
[... the same three-line error repeats for every reconnect attempt from 11:20:55.639157 through 11:20:55.708584: roughly 210 attempts in about 70 ms, all against tqpair=0x219c600 at 10.0.0.2 port 4420, all failing with errno = 111 (ECONNREFUSED) ...]
00:34:41.465 [2024-07-11 11:20:55.708584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.708650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 
00:34:41.465 [2024-07-11 11:20:55.708905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.708972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.709223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.709289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.709499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.709564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.709775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.709841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.710069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.710135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.710414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.710479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.710693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.710790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.711007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.711073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.711319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.711384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.711596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.711662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 
00:34:41.465 [2024-07-11 11:20:55.711927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.711995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.712241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.712307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.712543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.712608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.712824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.712893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.713134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.713200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.713418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.713482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.713689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.713768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.713969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.714034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.714270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.714335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 00:34:41.465 [2024-07-11 11:20:55.714610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.465 [2024-07-11 11:20:55.714675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.465 qpair failed and we were unable to recover it. 
00:34:41.465 [2024-07-11 11:20:55.714977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.715043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.715300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.715368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.715603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.715668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.715938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.716005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.716250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.716315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.716532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.716598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.716865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.716932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.717176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.717241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.717491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.717556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.717820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.717887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 
00:34:41.466 [2024-07-11 11:20:55.718143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.718209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.718462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.718527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.718745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.718823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.719079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.719144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.719390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.719455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.719682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.719747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.720004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.720070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.720279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.720344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.720596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.720660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.720932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.720999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 
00:34:41.466 [2024-07-11 11:20:55.721224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.721289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.721504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.721568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.721790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.721858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.722075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.722142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.722352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.722418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.722631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.722697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.722956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.723023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.723262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.723328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.723577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.723651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.723924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.723992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 
00:34:41.466 [2024-07-11 11:20:55.724214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.724279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.724535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.724600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.724815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.724882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.725101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.725166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.725406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.725471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.725707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.725785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.726045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.726110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.726374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.726440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.726651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.726716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.726944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.727009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 
00:34:41.466 [2024-07-11 11:20:55.727282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.727347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.466 [2024-07-11 11:20:55.727585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.466 [2024-07-11 11:20:55.727651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.466 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.727963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.728029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.728344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.728409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.728701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.728791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.729088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.729153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.729383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.729448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.729688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.730006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.730072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.730285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.730350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 
00:34:41.467 [2024-07-11 11:20:55.730575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.730639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.730925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.730991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.731286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.731350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.731569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.731635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.731843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.731910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.732169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.732245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.732471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.732535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.732822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.732889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.733136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.733202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.733447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.733512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 
00:34:41.467 [2024-07-11 11:20:55.733817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.733884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.734153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.734218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.734470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.734535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.734793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.734886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.735184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.735250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.735534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.735599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.735892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.735959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.736267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.736583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.736648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.736928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.736997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 
00:34:41.467 [2024-07-11 11:20:55.737215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.737280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.737514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.737578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.737842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.737908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.738165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.738230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.738497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.738562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.738850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.738917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.739161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.739227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.739484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.739550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.467 [2024-07-11 11:20:55.739820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.467 [2024-07-11 11:20:55.739888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.467 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.740108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.740174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-11 11:20:55.740419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.740484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.740775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.740841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.741087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.741162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.741442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.741508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.741808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.741875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.742111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.742176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.742390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.742455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.742696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.742776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.743026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.743092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.743308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.743373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-11 11:20:55.743626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.743690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.743908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.743975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.744189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.744254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.744495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.744560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.744843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.744910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.745169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.745234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.745484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.745550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.745737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.745817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.746064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.746129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.746414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.746478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-11 11:20:55.746690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.746772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.747011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.747077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.747308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.747372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.747591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.747654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.747905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.747975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.748267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.748332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.748545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.748609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.748839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.748906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.749147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.749213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 00:34:41.468 [2024-07-11 11:20:55.749506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.468 [2024-07-11 11:20:55.749571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.468 qpair failed and we were unable to recover it. 
00:34:41.468 [2024-07-11 11:20:55.749809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.468 [2024-07-11 11:20:55.749876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.468 qpair failed and we were unable to recover it.
00:34:41.468 [... the same three-line failure repeats continuously from 11:20:55.749 through 11:20:55.816, every few hundred microseconds, always with errno = 111 for tqpair=0x219c600, addr=10.0.0.2, port=4420 ...]
00:34:41.473 [2024-07-11 11:20:55.815943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.473 [2024-07-11 11:20:55.816011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.473 qpair failed and we were unable to recover it.
00:34:41.473 [2024-07-11 11:20:55.816288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.473 [2024-07-11 11:20:55.816354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.473 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.816609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.816674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.816946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.817013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.817275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.817340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.817538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.817604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.817862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.817929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.818145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.818210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.818510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.818576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.818790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.818857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.819111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.819176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 
00:34:41.474 [2024-07-11 11:20:55.819407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.819473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.819695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.819774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.820069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.820134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.820382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.820447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.820729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.820823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.821065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.821131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.821347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.821414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.821637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.821702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.821985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.822051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.822314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.822378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 
00:34:41.474 [2024-07-11 11:20:55.822632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.822698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.822936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.823003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.823251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.823316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.823562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.823627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.823885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.823952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.824192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.824257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.824528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.824593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.824854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.824920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.825150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.825215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.825467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.825532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 
00:34:41.474 [2024-07-11 11:20:55.825783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.825849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.826060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.826126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.826375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.826441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.826729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.826812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.827030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.827096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.827347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.827412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.827666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.827730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.827962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.828029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.828239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.828304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.828548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.828615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 
00:34:41.474 [2024-07-11 11:20:55.828853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.828921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.829190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.474 [2024-07-11 11:20:55.829255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.474 qpair failed and we were unable to recover it. 00:34:41.474 [2024-07-11 11:20:55.829518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.829583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.829829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.829896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.830094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.830161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.830404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.830471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.830780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.830845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.831147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.831227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.831515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.831581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.831848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.831914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 
00:34:41.475 [2024-07-11 11:20:55.832134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.832200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.832454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.832520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.832813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.832879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.833093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.833158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.833409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.833475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.833724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.833804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.834037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.834102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.834394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.834459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.834748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.834827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.835117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.835182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 
00:34:41.475 [2024-07-11 11:20:55.835387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.835452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.835689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.835768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.836033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.836098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.836349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.836414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.836683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.836748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.837005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.837071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.837329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.837394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.837679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.837743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.838068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.838132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.838395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.838460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 
00:34:41.475 [2024-07-11 11:20:55.838718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.838801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.839093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.839157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.839445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.839510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.839788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.839855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.840089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.840164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.840461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.840527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.840786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.840852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.841101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.841166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.841416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.841479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.841699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.841777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 
00:34:41.475 [2024-07-11 11:20:55.842025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.842091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.842381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.842445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.842644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.842709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.475 qpair failed and we were unable to recover it. 00:34:41.475 [2024-07-11 11:20:55.842941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.475 [2024-07-11 11:20:55.843006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.843290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.843354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.843608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.843672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.843936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.844003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.844223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.844287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.844495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.844560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.844812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.844879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 
00:34:41.476 [2024-07-11 11:20:55.845130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.845195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.845411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.845475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.845703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.846019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.846083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.846287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.846353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.846603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.846669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.846926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.846993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.847229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.847294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.847624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.847688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.847996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.848062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 
00:34:41.476 [2024-07-11 11:20:55.848303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.848368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.848622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.848687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.848969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.849037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.849293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.849358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.849654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.849719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.850012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.850078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.850314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.850379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.850569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.850632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.850893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.850957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.851241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.851306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 
00:34:41.476 [2024-07-11 11:20:55.851587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.851652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.851892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.851964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.852225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.852292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.852546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.852612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.852855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.852922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.853164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.853229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.853444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.853510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.853724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.853803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.854021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.854084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.476 qpair failed and we were unable to recover it. 00:34:41.476 [2024-07-11 11:20:55.854323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.476 [2024-07-11 11:20:55.854387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 
00:34:41.477 [2024-07-11 11:20:55.854676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.854745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.855061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.855126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.855410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.855475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.855741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.855820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.856113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.856179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.856393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.856455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.856698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.477 [2024-07-11 11:20:55.856771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.477 qpair failed and we were unable to recover it. 00:34:41.477 [2024-07-11 11:20:55.857031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.758 [2024-07-11 11:20:55.857096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.758 qpair failed and we were unable to recover it. 00:34:41.758 [2024-07-11 11:20:55.857339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.758 [2024-07-11 11:20:55.857405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.758 qpair failed and we were unable to recover it. 00:34:41.758 [2024-07-11 11:20:55.857689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.758 [2024-07-11 11:20:55.857771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.758 qpair failed and we were unable to recover it. 
00:34:41.758 [2024-07-11 11:20:55.857999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.758 [2024-07-11 11:20:55.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.758 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps changing, from 11:20:55.858 through 11:20:55.916 ...]
00:34:41.763 [2024-07-11 11:20:55.916728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.916791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it.
[... from this point the repeated sequence reports tqpair=0x7f74c0000b90 instead of 0x219c600; the connect() failures and qpair recovery failures otherwise continue unchanged through 11:20:55.917 ...]
00:34:41.763 [2024-07-11 11:20:55.917817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.917854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.917953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.917988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.918156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.918190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.918325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.918359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.918488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.918522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.918667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.918708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.918860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.918895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.919068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.919103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.919233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.919267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.919409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.919443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 
00:34:41.763 [2024-07-11 11:20:55.919592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.919627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.919766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.763 [2024-07-11 11:20:55.919811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.763 qpair failed and we were unable to recover it. 00:34:41.763 [2024-07-11 11:20:55.919948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.919983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.920137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.920173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.920319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.920354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.920531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.920568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.920711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.920745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.920931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.920984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.921166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.921202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.921353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.921387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-11 11:20:55.921536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.921571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.921717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.921760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.921909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.921943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.922157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.922222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.922419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.922480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.922617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.922652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.922808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.922843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.922997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.923181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.923384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-11 11:20:55.923531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.923733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.923955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.923995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.924099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.924133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.924305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.924340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.924526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.924602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.924973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.925019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.925248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.925285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.925421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.925473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.925738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.925826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-11 11:20:55.925975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.926045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.926363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.926441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.926637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.926672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.926818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.926855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.926976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.927021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.927130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.927181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.927310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.927345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.927537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.927604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.927833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.927868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.927987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.928021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 
00:34:41.764 [2024-07-11 11:20:55.928325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.928384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.928583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.928643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.764 [2024-07-11 11:20:55.928834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.764 [2024-07-11 11:20:55.928869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.764 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.929041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.929220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.929357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.929564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.929826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.929967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.930013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.930225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.930294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 
00:34:41.765 [2024-07-11 11:20:55.930478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.930534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.930664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.930698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.930845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.930880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.931000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.931075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.931254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.931314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.931624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.931684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.931908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.931943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.932094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.932157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.932393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.932427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.932551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.932586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 
00:34:41.765 [2024-07-11 11:20:55.932707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.932743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.932944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.932978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.933924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.933984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.934161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.934207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.934359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.934404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 
00:34:41.765 [2024-07-11 11:20:55.934589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.934635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.934857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.934903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.935056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.935101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.935319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.935367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.935572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.935620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.935837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.935887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.936076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.936132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.936337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.936396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.936643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.936703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.936956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.937007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 
00:34:41.765 [2024-07-11 11:20:55.937157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.937231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.937469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.937529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.937777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.937830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.938031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.765 [2024-07-11 11:20:55.938082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.765 qpair failed and we were unable to recover it. 00:34:41.765 [2024-07-11 11:20:55.938314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.938390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.938614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.938665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.938877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.938930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.939153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.939205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.939406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.939457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.939650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.939702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 
00:34:41.766 [2024-07-11 11:20:55.939939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.940241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.940293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.940459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.940511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.940706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.940784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.940985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.941038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.941246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.941307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.941587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.941646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.941904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.941962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.942216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.942276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.942512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.942571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 
00:34:41.766 [2024-07-11 11:20:55.942862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.942923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.943149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.943202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.943420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.943476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.943745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.943815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.944072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.944133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.944383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.944463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.944674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.944729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.944957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.945013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.945343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.945421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.945691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.945750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 
00:34:41.766 [2024-07-11 11:20:55.945974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.946034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.946268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.946332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.946597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.946657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.946864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.946926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.947216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.947294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.947526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.947585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.947798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.947860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.948130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.948210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.948477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.766 [2024-07-11 11:20:55.948536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.766 qpair failed and we were unable to recover it. 00:34:41.766 [2024-07-11 11:20:55.948807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.767 [2024-07-11 11:20:55.948869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.767 qpair failed and we were unable to recover it. 
00:34:41.767 [2024-07-11 11:20:55.949124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.767 [2024-07-11 11:20:55.949202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.767 qpair failed and we were unable to recover it.
00:34:41.767 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats a further 208 times between 11:20:55.949 and 11:20:56.020 ...]
00:34:41.772 [2024-07-11 11:20:56.020726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.772 [2024-07-11 11:20:56.020811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.772 qpair failed and we were unable to recover it.
00:34:41.772 [2024-07-11 11:20:56.021080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.021157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.021452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.021529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.021799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.021861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.022167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.022243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.022555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.022633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.022863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.022926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.023226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.023305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.023578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.023654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.023938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.023998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.024248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.024326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 
00:34:41.772 [2024-07-11 11:20:56.024632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.024707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.024987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.025066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.025323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.025400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.025669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.025729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.025994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.026071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.026364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.026442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.026671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.026730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.027002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.027082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.027370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.027449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.027645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.027705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 
00:34:41.772 [2024-07-11 11:20:56.028013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.028098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.028338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.028416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.028656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.028716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.029041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.029119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.029413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.029492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.029699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.029777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.030033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.030111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.030357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.030436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.030680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.772 [2024-07-11 11:20:56.030740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.772 qpair failed and we were unable to recover it. 00:34:41.772 [2024-07-11 11:20:56.031070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.031148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 
00:34:41.773 [2024-07-11 11:20:56.031450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.031529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.031792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.031863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.032125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.032185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.032483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.032560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.032846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.032908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.033204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.033281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.033593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.033673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.033994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.034073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.034378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.034455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.034688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.034747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 
00:34:41.773 [2024-07-11 11:20:56.035007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.035084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.035325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.035401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.035679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.035739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.036061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.036139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.036441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.036519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.036801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.036864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.037181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.037259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.037533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.037611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.037846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.037906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.038205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.038283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 
00:34:41.773 [2024-07-11 11:20:56.038590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.038666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.038955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.039016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.039248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.039325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.039595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.039655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.039959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.040037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.040337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.040414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.040692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.040769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.041041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.041120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.041424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.041510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.041744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.041823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 
00:34:41.773 [2024-07-11 11:20:56.042069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.042146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.042428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.042506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.042780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.042841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.043146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.043225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.043487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.043565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.043844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.043906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.044203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.044282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.044584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.044662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.044910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.044973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 00:34:41.773 [2024-07-11 11:20:56.045261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.773 [2024-07-11 11:20:56.045339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.773 qpair failed and we were unable to recover it. 
00:34:41.773 [2024-07-11 11:20:56.045628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.045706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.045987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.046065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.046374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.046451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.046717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.046791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.047093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.047171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.047452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.047512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.047782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.047844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.048098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.048178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.048481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.048558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.048849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.048910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 
00:34:41.774 [2024-07-11 11:20:56.049129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.049209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.049515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.049592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.049889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.049968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.050182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.050262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.050527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.050605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.050903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.050990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.051242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.051319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.051549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.051609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.051832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.051914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.052171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.052249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 
00:34:41.774 [2024-07-11 11:20:56.052524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.052584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.052786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.052847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.053094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.053171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.053447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.053523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.053833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.053913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.054181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.054258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.054473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.054550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.054841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.054920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.055217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.055294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.055561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.055621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 
00:34:41.774 [2024-07-11 11:20:56.055871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.055951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.056175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.056252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.056476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.056553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.056845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.056925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.057229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.057306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.057554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.057614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.057862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.057942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.058215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.058293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.058560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.058620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.058911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.058991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 
00:34:41.774 [2024-07-11 11:20:56.059294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.059372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.774 qpair failed and we were unable to recover it. 00:34:41.774 [2024-07-11 11:20:56.059606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.774 [2024-07-11 11:20:56.059666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.059951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.060030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.060332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.060412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.060646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.060705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.060998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.061078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.061340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.061418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.061687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.061748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.062066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.062143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.062362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.062440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 
00:34:41.775 [2024-07-11 11:20:56.062683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.062743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.063079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.063434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.063497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.063748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.063825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.064109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.064170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.064445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.064506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.064681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.064751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.065072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.065150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.065423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.065501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 00:34:41.775 [2024-07-11 11:20:56.065725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.775 [2024-07-11 11:20:56.065802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.775 qpair failed and we were unable to recover it. 
00:34:41.775 [2024-07-11 11:20:56.066103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:41.775 [2024-07-11 11:20:56.066180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:41.775 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times for successive reconnect attempts, log timestamps 00:34:41.775 through 00:34:41.780 (controller timestamps [2024-07-11 11:20:56.066] through [2024-07-11 11:20:56.137]), with only the timestamps advancing ...]
00:34:41.780 [2024-07-11 11:20:56.137438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-11 11:20:56.137514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-11 11:20:56.137743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-11 11:20:56.137815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-11 11:20:56.138085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.780 [2024-07-11 11:20:56.138145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.780 qpair failed and we were unable to recover it. 00:34:41.780 [2024-07-11 11:20:56.138345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.138431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.138663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.138723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.138995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.139073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.139330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.139407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.139640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.139700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.140024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.140101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.140318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.140398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-11 11:20:56.140608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.140669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.140962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.141042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.141345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.141422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.141695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.141769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.142080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.142158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.142433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.142510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.142766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.142828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.143143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.143220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.143486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.143563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.143776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.143838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-11 11:20:56.144105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.144184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.144463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.144541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.144851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.144913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.145160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.145238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.145541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.145619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.145922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.146001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.146263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.146338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.146563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.146625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.146917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.146979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.147275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.147352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-11 11:20:56.147620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.147689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.147991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.148071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.148360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.148421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.148647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.148708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.148988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.149069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.149318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.149395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.149687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.149748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.150066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.150144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.150353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.150430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.150693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.150768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 
00:34:41.781 [2024-07-11 11:20:56.151111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.151189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.151455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.151532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.151707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.151784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.152039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.152116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.152422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.152499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.152728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.152806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.153064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.153140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.153395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.781 [2024-07-11 11:20:56.153471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.781 qpair failed and we were unable to recover it. 00:34:41.781 [2024-07-11 11:20:56.153672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.153732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.154052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.154131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 
00:34:41.782 [2024-07-11 11:20:56.154441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.154518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.154717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.154794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.155056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.155133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.155437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.155513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.155808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.155870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.156129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.156207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.156499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.156575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.156845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.156916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.157221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.157301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.157595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.157673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 
00:34:41.782 [2024-07-11 11:20:56.157873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.157936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.158196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.158274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.158590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.158669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.158972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.159052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.159352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.159429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.159656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.159717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.160041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.160121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.160369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.160446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.160683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.160742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.161063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.161141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 
00:34:41.782 [2024-07-11 11:20:56.161455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.161532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.161843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.161921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.162159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.162236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.162528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.162606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.162859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.162937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.163249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.163327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.166769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.166819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.167048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.167100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.167237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.167296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.167478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.167529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 
00:34:41.782 [2024-07-11 11:20:56.167641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.167668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.167829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.167895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.168132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.168184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:41.782 [2024-07-11 11:20:56.168375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:41.782 [2024-07-11 11:20:56.168429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:41.782 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.168529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.168557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.168675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.168703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.168923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.168976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.169150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.169204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.169394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.169447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.169537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.169566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 
00:34:42.067 [2024-07-11 11:20:56.169677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.169705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.169926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.169992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.170150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.170209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.170384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.067 [2024-07-11 11:20:56.170444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.067 qpair failed and we were unable to recover it. 00:34:42.067 [2024-07-11 11:20:56.170564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.170592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.170710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.170738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.170898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.170955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.171119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.171272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.171432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [2024-07-11 11:20:56.171603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.171758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.171906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.171934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.172867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.172894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [2024-07-11 11:20:56.173009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.173928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.173955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.174048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.174241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [2024-07-11 11:20:56.174411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.174555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.174664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.174841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.174869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 00:34:42.068 [2024-07-11 11:20:56.175796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.175825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it. 
00:34:42.068 [2024-07-11 11:20:56.175964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.068 [2024-07-11 11:20:56.176019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.068 qpair failed and we were unable to recover it.
00:34:42.068 [... the connect() failed, errno = 111 / sock connection error pair for tqpair=0x219c600 with addr=10.0.0.2, port=4420 repeats verbatim (timestamps 11:20:56.176 through 11:20:56.195), each occurrence followed by "qpair failed and we were unable to recover it." ...]
00:34:42.072 [2024-07-11 11:20:56.194555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.194580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.194672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.194698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.194825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.194851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.194968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.194996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.195049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21aa5b0 (9): Bad file descriptor 00:34:42.072 [2024-07-11 11:20:56.195270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.195335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.195488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.195542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.195678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.195709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.195834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.195869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.196051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 
00:34:42.072 [2024-07-11 11:20:56.196247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.196456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.196628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.196740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.196908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.196953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 
00:34:42.072 [2024-07-11 11:20:56.197858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.072 [2024-07-11 11:20:56.197884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.072 qpair failed and we were unable to recover it. 00:34:42.072 [2024-07-11 11:20:56.197968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.197995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.198882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.198999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 
00:34:42.073 [2024-07-11 11:20:56.199164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.199273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.199427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.199600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.199758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.199885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.199918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.200092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.200259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.200495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.200661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 
00:34:42.073 [2024-07-11 11:20:56.200780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.200924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.200952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.201881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.201909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.202001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.202029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 00:34:42.073 [2024-07-11 11:20:56.202133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.073 [2024-07-11 11:20:56.202159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.073 qpair failed and we were unable to recover it. 
00:34:42.073 [2024-07-11 11:20:56.202290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.202318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.202484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.202516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.202616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.202644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.202804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.202833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.202924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.202951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.203865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.203983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.073 [2024-07-11 11:20:56.204011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.073 qpair failed and we were unable to recover it.
00:34:42.073 [2024-07-11 11:20:56.204133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.204944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.204972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.205895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.205924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.206902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.206929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.207870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.207896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.208938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.208965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.074 qpair failed and we were unable to recover it.
00:34:42.074 [2024-07-11 11:20:56.209967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.074 [2024-07-11 11:20:56.209994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.210937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.210965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.211969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.211997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.212898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.212925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.213885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.213975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.214883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.214910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.075 [2024-07-11 11:20:56.215764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.075 [2024-07-11 11:20:56.215801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.075 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.215898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.215924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.216902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.216994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.217896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.217924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.218952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.218981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.219925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.219953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.220155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.220343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.220529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.220650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.220801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.076 [2024-07-11 11:20:56.220961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.076 [2024-07-11 11:20:56.221006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.076 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.221876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.221989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.222906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.222936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.223885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.223910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.224917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.224944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.225903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.225931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.226041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.226070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.226187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.226215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.226310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.077 [2024-07-11 11:20:56.226337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.077 qpair failed and we were unable to recover it.
00:34:42.077 [2024-07-11 11:20:56.226442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.077 [2024-07-11 11:20:56.226477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.077 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.226606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.226634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.226796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.226826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.226928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.226954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 
00:34:42.078 [2024-07-11 11:20:56.227818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.227942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.227968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.228946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.228974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 
00:34:42.078 [2024-07-11 11:20:56.229071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.229939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.229967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 
00:34:42.078 [2024-07-11 11:20:56.230421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.230919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.230948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.231770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 
00:34:42.078 [2024-07-11 11:20:56.231909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.231939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.078 [2024-07-11 11:20:56.232068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.078 [2024-07-11 11:20:56.232098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.078 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.232929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.232957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 
00:34:42.079 [2024-07-11 11:20:56.233325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.233878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.233971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 
00:34:42.079 [2024-07-11 11:20:56.234689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.234954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.234982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.235813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 
00:34:42.079 [2024-07-11 11:20:56.235937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.235964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.236914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.236940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.237049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.237076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.237160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.237185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 
00:34:42.079 [2024-07-11 11:20:56.237264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.237290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.237373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.237401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.079 [2024-07-11 11:20:56.237543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.079 [2024-07-11 11:20:56.237572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.079 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.237657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.237683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.237775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.237813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.237902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.237930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 
00:34:42.080 [2024-07-11 11:20:56.238493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.238885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.238979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 
00:34:42.080 [2024-07-11 11:20:56.239813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.239957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.239985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.240957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.240984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 
00:34:42.080 [2024-07-11 11:20:56.241077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.080 [2024-07-11 11:20:56.241907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.080 [2024-07-11 11:20:56.241935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.080 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 
00:34:42.081 [2024-07-11 11:20:56.242309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.242926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.242952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 
00:34:42.081 [2024-07-11 11:20:56.243586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.243889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.243916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.244809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 
00:34:42.081 [2024-07-11 11:20:56.244946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.244971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.245972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.245999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 00:34:42.081 [2024-07-11 11:20:56.246110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.081 [2024-07-11 11:20:56.246134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.081 qpair failed and we were unable to recover it. 
00:34:42.081 [2024-07-11 11:20:56.246251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.081 [2024-07-11 11:20:56.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.081 qpair failed and we were unable to recover it.
00:34:42.081-00:34:42.087 [... the three-line failure sequence above repeats continuously from 2024-07-11 11:20:56.246396 through 11:20:56.274372, cycling over tqpair=0x219c600, 0x7f74b8000b90, 0x7f74b0000b90, and 0x7f74c0000b90; every connect() to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair fails without recovery ...]
00:34:42.087 [2024-07-11 11:20:56.274455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.274485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.274622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.274650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.274768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.274813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.274933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.274965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.275760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-11 11:20:56.275890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.275930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.276871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.276899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-11 11:20:56.277267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.277879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.277907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-11 11:20:56.278714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.278902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.278979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.279005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.279121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.087 [2024-07-11 11:20:56.279148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.087 qpair failed and we were unable to recover it. 00:34:42.087 [2024-07-11 11:20:56.279237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.279372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.279489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.279595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.279764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.279919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.279959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-11 11:20:56.280057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.280952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.280978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-11 11:20:56.281387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.281912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.281939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-11 11:20:56.282746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.282887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.282914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.283925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.283951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.284029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.284060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-11 11:20:56.284175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.284202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.284319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.088 [2024-07-11 11:20:56.284346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.088 qpair failed and we were unable to recover it. 00:34:42.088 [2024-07-11 11:20:56.284489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.284519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.284606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.284633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.284779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.284807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.284917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.284945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-11 11:20:56.285570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.285949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.285975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.286797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-11 11:20:56.286940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.286966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.287907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.287934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-11 11:20:56.288131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.089 [2024-07-11 11:20:56.288838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.089 qpair failed and we were unable to recover it. 00:34:42.089 [2024-07-11 11:20:56.288932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.288960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 
00:34:42.090 [2024-07-11 11:20:56.289513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.289897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.289925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.290722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 
00:34:42.090 [2024-07-11 11:20:56.290872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.290911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.291851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.291878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.292020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.292047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 00:34:42.090 [2024-07-11 11:20:56.292161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.292187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it. 
00:34:42.090 [2024-07-11 11:20:56.292269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.090 [2024-07-11 11:20:56.292295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.090 qpair failed and we were unable to recover it.
00:34:42.090 [... the same three-message sequence (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 11:20:56.292 and 11:20:56.320, cycling through tqpair=0x7f74b8000b90, 0x7f74b0000b90, 0x7f74c0000b90, and 0x219c600, all with addr=10.0.0.2, port=4420 ...]
00:34:42.095 [2024-07-11 11:20:56.320476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-11 11:20:56.320502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-11 11:20:56.320631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.095 [2024-07-11 11:20:56.320671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.095 qpair failed and we were unable to recover it. 00:34:42.095 [2024-07-11 11:20:56.320793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.320823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.320926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.320954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.321744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-11 11:20:56.321920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.321948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.322952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.322978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-11 11:20:56.323204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.323893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.323923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-11 11:20:56.324588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.324888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.324979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.325774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 
00:34:42.096 [2024-07-11 11:20:56.325892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.325920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.326034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.326061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.326143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.096 [2024-07-11 11:20:56.326170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.096 qpair failed and we were unable to recover it. 00:34:42.096 [2024-07-11 11:20:56.326261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.326292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.326405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.326434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.326520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.326547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.326765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.326793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.326888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.326916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-11 11:20:56.327309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.327961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.327988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-11 11:20:56.328624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.328903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.328930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.329763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-11 11:20:56.329876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.329903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.330917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.330946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.331033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.331061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 
00:34:42.097 [2024-07-11 11:20:56.331174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.331200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.331294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.331334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.097 qpair failed and we were unable to recover it. 00:34:42.097 [2024-07-11 11:20:56.331456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.097 [2024-07-11 11:20:56.331485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.331602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.331631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.331723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.331751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.331871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.331899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-11 11:20:56.332562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.332963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.332991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-11 11:20:56.333821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.333935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.333964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.334892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.334918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 
00:34:42.098 [2024-07-11 11:20:56.335123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.335931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.335959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.336072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.098 [2024-07-11 11:20:56.336098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.098 qpair failed and we were unable to recover it. 00:34:42.098 [2024-07-11 11:20:56.336209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.336326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.336440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 
00:34:42.099 [2024-07-11 11:20:56.336577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.336722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.336846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.336961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.336988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 00:34:42.099 [2024-07-11 11:20:56.337716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.099 [2024-07-11 11:20:56.337745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.099 qpair failed and we were unable to recover it. 
00:34:42.099 [2024-07-11 11:20:56.337846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.099 [2024-07-11 11:20:56.337873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 
00:34:42.099 qpair failed and we were unable to recover it. 
00:34:42.099 [... the same three-line record — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:20:56.337961 through 11:20:56.365137, cycling over tqpairs 0x7f74c0000b90, 0x7f74b8000b90, 0x7f74b0000b90, and 0x219c600, all against addr=10.0.0.2, port=4420 ...] 
00:34:42.104 [2024-07-11 11:20:56.365258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.104 [2024-07-11 11:20:56.365290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 
00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-11 11:20:56.365404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.365431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.365544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.365570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.365686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.365712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.365839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.365866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.365979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.366006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.366090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.366121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.366236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.366264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.366353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.366380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.366497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.366526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.367349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.367381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 
00:34:42.104 [2024-07-11 11:20:56.367473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.367500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.367616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.367643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.367767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.367795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.367881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.367907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.104 [2024-07-11 11:20:56.368003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.104 [2024-07-11 11:20:56.368029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.104 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-11 11:20:56.368805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.368914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.368941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.369959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.369987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-11 11:20:56.370103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.370898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.370924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-11 11:20:56.371470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.371893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.371986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.372127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.372296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.372462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.372600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.372789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 
00:34:42.105 [2024-07-11 11:20:56.372952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.372980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.373095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.373122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.373239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.373266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.373378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.373404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.373486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.373513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.105 qpair failed and we were unable to recover it. 00:34:42.105 [2024-07-11 11:20:56.373634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.105 [2024-07-11 11:20:56.373661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.373772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.373800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.373880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.373906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 
00:34:42.106 [2024-07-11 11:20:56.374279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.374916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.374944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 
00:34:42.106 [2024-07-11 11:20:56.375802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.375917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.375944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.376877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.376904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 
00:34:42.106 [2024-07-11 11:20:56.377158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.377890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.377991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.378017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.378108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.378135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.378247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.378274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.106 qpair failed and we were unable to recover it. 00:34:42.106 [2024-07-11 11:20:56.378367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.106 [2024-07-11 11:20:56.378394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 
00:34:42.107 [2024-07-11 11:20:56.378496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.378537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.378663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.378691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.378801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.378828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.378918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.378945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.379851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.379880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 
00:34:42.107 [2024-07-11 11:20:56.379990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.380914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.380941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 
00:34:42.107 [2024-07-11 11:20:56.381341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.381958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.381985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 
00:34:42.107 [2024-07-11 11:20:56.382541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.382913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.382997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.383112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.383257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.383366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.383504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 00:34:42.107 [2024-07-11 11:20:56.383649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.107 [2024-07-11 11:20:56.383677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.107 qpair failed and we were unable to recover it. 
00:34:42.107 [2024-07-11 11:20:56.383778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.107 [2024-07-11 11:20:56.383818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-11 11:20:56.384566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-11 11:20:56.384593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.108 [2024-07-11 11:20:56.387470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.108 [2024-07-11 11:20:56.387507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.108 qpair failed and we were unable to recover it.
00:34:42.109 [2024-07-11 11:20:56.390713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.109 [2024-07-11 11:20:56.390764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.109 qpair failed and we were unable to recover it.
00:34:42.109 [... the identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" record pair repeats continuously from 11:20:56.383 through 11:20:56.410 against addr=10.0.0.2, port=4420 for tqpairs 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90 ...]
00:34:42.114 [2024-07-11 11:20:56.410897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.114 [2024-07-11 11:20:56.410923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.114 qpair failed and we were unable to recover it.
00:34:42.114 [2024-07-11 11:20:56.411018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.411853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.411880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 
00:34:42.114 [2024-07-11 11:20:56.412277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.412945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.412973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 
00:34:42.114 [2024-07-11 11:20:56.413610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.413909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.413937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.414824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 
00:34:42.114 [2024-07-11 11:20:56.414971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.414997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.114 [2024-07-11 11:20:56.415777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.114 qpair failed and we were unable to recover it. 00:34:42.114 [2024-07-11 11:20:56.415895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.415922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 
00:34:42.115 [2024-07-11 11:20:56.416277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.416929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.416957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 
00:34:42.115 [2024-07-11 11:20:56.417640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.417904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.417931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.418769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 
00:34:42.115 [2024-07-11 11:20:56.418885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.418913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.419861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.419978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 
00:34:42.115 [2024-07-11 11:20:56.420255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.420891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.420918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.421000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.421026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.115 [2024-07-11 11:20:56.421139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.115 [2024-07-11 11:20:56.421167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.115 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.421253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.421358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-11 11:20:56.421499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.421647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.421795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.421913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.421941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.422734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-11 11:20:56.422863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.422891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.423927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.423954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-11 11:20:56.424205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.424881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.424908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.425020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.425047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.425185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.425212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.425301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.425327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.116 [2024-07-11 11:20:56.425434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.425475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 
00:34:42.116 [2024-07-11 11:20:56.425583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.116 [2024-07-11 11:20:56.425624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.116 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.425718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.425746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.425848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.425875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.425967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.425994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.426795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 
00:34:42.117 [2024-07-11 11:20:56.426937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.426966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.427883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.427911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 00:34:42.117 [2024-07-11 11:20:56.428023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.117 [2024-07-11 11:20:56.428050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.117 qpair failed and we were unable to recover it. 
00:34:42.117 [2024-07-11 11:20:56.428138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.117 [2024-07-11 11:20:56.428164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.117 qpair failed and we were unable to recover it.
[... this three-line failure sequence repeats roughly 200 more times between 11:20:56.428 and 11:20:56.457, alternating among tqpair=0x219c600, tqpair=0x7f74b8000b90, and tqpair=0x7f74b0000b90; every connect() to 10.0.0.2 port 4420 fails with errno = 111 and none of the qpairs recover ...]
00:34:42.122 [2024-07-11 11:20:56.457168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.457311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.457429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.457664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.457841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.457963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.457990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.458074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.458101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.458197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.458224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.122 qpair failed and we were unable to recover it. 00:34:42.122 [2024-07-11 11:20:56.458333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.122 [2024-07-11 11:20:56.458359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.458488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.458528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 
00:34:42.123 [2024-07-11 11:20:56.458647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.458676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.458763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.458791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.458906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.458933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.459883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.459911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 
00:34:42.123 [2024-07-11 11:20:56.459993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.460905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.460984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 
00:34:42.123 [2024-07-11 11:20:56.461243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.461966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.461993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 
00:34:42.123 [2024-07-11 11:20:56.462599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.462879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.462907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.123 [2024-07-11 11:20:56.463792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.123 qpair failed and we were unable to recover it. 00:34:42.123 [2024-07-11 11:20:56.463905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.463931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 
00:34:42.124 [2024-07-11 11:20:56.464020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.464871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.464996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.465137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.465252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 
00:34:42.124 [2024-07-11 11:20:56.465378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.465492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.465623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.124 [2024-07-11 11:20:56.465766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.124 [2024-07-11 11:20:56.465796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.124 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-11 11:20:56.465895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-11 11:20:56.465936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-11 11:20:56.466067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-11 11:20:56.466108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-11 11:20:56.466265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.407 [2024-07-11 11:20:56.466306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.407 qpair failed and we were unable to recover it. 00:34:42.407 [2024-07-11 11:20:56.466428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.466457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.466598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.466626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.466767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.466795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 
00:34:42.408 [2024-07-11 11:20:56.466908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.466936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.467946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.467973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 
00:34:42.408 [2024-07-11 11:20:56.468175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.468836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.468986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 
00:34:42.408 [2024-07-11 11:20:56.469585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.469944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.469971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 
00:34:42.408 [2024-07-11 11:20:56.470848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.470875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.470991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.408 [2024-07-11 11:20:56.471683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.408 qpair failed and we were unable to recover it. 00:34:42.408 [2024-07-11 11:20:56.471795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.471822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.471934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.471961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 
00:34:42.409 [2024-07-11 11:20:56.472161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.472922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.472950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 
00:34:42.409 [2024-07-11 11:20:56.473592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.473887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.473916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 00:34:42.409 [2024-07-11 11:20:56.474849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.409 [2024-07-11 11:20:56.474876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.409 qpair failed and we were unable to recover it. 
00:34:42.409 [2024-07-11 11:20:56.475018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.409 [2024-07-11 11:20:56.475046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.409 qpair failed and we were unable to recover it.
00:34:42.414 [the three-line error pattern above repeats for every reconnect attempt from 11:20:56.475018 through 11:20:56.504199, cycling through tqpair=0x219c600, 0x7f74b8000b90, 0x7f74b0000b90, and 0x7f74c0000b90; every connect() to 10.0.0.2 port 4420 fails with errno = 111, and no qpair recovers]
00:34:42.414 [2024-07-11 11:20:56.504314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.414 [2024-07-11 11:20:56.504341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.414 qpair failed and we were unable to recover it. 00:34:42.414 [2024-07-11 11:20:56.504456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.414 [2024-07-11 11:20:56.504483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.414 qpair failed and we were unable to recover it. 00:34:42.414 [2024-07-11 11:20:56.504602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.414 [2024-07-11 11:20:56.504631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.414 qpair failed and we were unable to recover it. 00:34:42.414 [2024-07-11 11:20:56.504770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.414 [2024-07-11 11:20:56.504799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.504910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.504943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 
00:34:42.415 [2024-07-11 11:20:56.505732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.505892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.505978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.506873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.506986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 
00:34:42.415 [2024-07-11 11:20:56.507127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.507943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.507970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 
00:34:42.415 [2024-07-11 11:20:56.508496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.508925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.508953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 
00:34:42.415 [2024-07-11 11:20:56.509855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.509963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.509990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.510096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.510123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.415 [2024-07-11 11:20:56.510234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.415 [2024-07-11 11:20:56.510262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.415 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.510380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.510407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.510524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.510553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.510668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.510695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.510788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.510815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.510899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.510925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-11 11:20:56.511209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.511896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.511987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-11 11:20:56.512518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.512875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.512902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.513759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 
00:34:42.416 [2024-07-11 11:20:56.513928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.513956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.416 [2024-07-11 11:20:56.514915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.416 [2024-07-11 11:20:56.514943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.416 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-11 11:20:56.515394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.515947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.515973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-11 11:20:56.516791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.516902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.516929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.517881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.517915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-11 11:20:56.518122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.518890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.518999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 
00:34:42.417 [2024-07-11 11:20:56.519541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.519943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.519970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.520076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.520103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.520219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.520246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.520339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.520370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.520456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.417 [2024-07-11 11:20:56.520484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.417 qpair failed and we were unable to recover it. 00:34:42.417 [2024-07-11 11:20:56.520626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.520653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.520795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.520822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.418 [2024-07-11 11:20:56.520961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.520988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.521918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.521945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.418 [2024-07-11 11:20:56.522350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.522886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.522995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.523022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.523140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.523167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.523283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.523310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.523401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.523428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 00:34:42.418 [2024-07-11 11:20:56.523519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.418 [2024-07-11 11:20:56.523548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.418 qpair failed and we were unable to recover it. 
00:34:42.423 [2024-07-11 11:20:56.551883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.551910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.552972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.552998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 
00:34:42.423 [2024-07-11 11:20:56.553226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.553909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.553936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.554047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.423 [2024-07-11 11:20:56.554074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.423 qpair failed and we were unable to recover it. 00:34:42.423 [2024-07-11 11:20:56.554161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.554296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.554437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 
00:34:42.424 [2024-07-11 11:20:56.554547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.554712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.554833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.554971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.554998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.555816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 
00:34:42.424 [2024-07-11 11:20:56.555931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.555958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.556879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.556981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 
00:34:42.424 [2024-07-11 11:20:56.557247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.557924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.557951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 
00:34:42.424 [2024-07-11 11:20:56.558540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.558870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.558910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.559066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.559209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.559357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.559530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.424 [2024-07-11 11:20:56.559641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.424 qpair failed and we were unable to recover it. 00:34:42.424 [2024-07-11 11:20:56.559809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.559850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.559973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 
00:34:42.425 [2024-07-11 11:20:56.560111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.560943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.560970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 
00:34:42.425 [2024-07-11 11:20:56.561514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.561959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.561987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.562809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 
00:34:42.425 [2024-07-11 11:20:56.562966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.562995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.563906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.563985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.564163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.564334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 
00:34:42.425 [2024-07-11 11:20:56.564442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.564561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.425 [2024-07-11 11:20:56.564704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.425 [2024-07-11 11:20:56.564731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.425 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.564828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.564861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.564981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-11 11:20:56.565723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.565875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.565901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.566843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.566872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-11 11:20:56.567165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.567957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.567984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.568074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.568186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.568365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-11 11:20:56.568545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.568718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.568919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.568949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.569923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.569961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 
00:34:42.426 [2024-07-11 11:20:56.570057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.570083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.570167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.570194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.570300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.570326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.426 [2024-07-11 11:20:56.570440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.426 [2024-07-11 11:20:56.570466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.426 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.570592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.570623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.570741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.570774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.570864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.570892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-11 11:20:56.571420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.571896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.571924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 00:34:42.427 [2024-07-11 11:20:56.572789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.427 [2024-07-11 11:20:56.572818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.427 qpair failed and we were unable to recover it. 
00:34:42.427 [2024-07-11 11:20:56.572906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.572934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.573899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.573926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.574888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.574928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.575920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.575947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.576037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.576064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.576178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.576205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.427 [2024-07-11 11:20:56.576323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.427 [2024-07-11 11:20:56.576350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.427 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.576442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.576470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.576582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.576609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.576691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.576721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.576845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.576874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.576988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.577957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.577985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.578964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.578992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.579909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.579935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.580882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.580997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.581876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.581902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.582007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.582037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.428 [2024-07-11 11:20:56.582179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.428 [2024-07-11 11:20:56.582206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.428 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.582943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.582970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.583881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.583914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.584892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.584918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.585367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.585396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.585518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.585545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.585637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.585665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.585783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.585811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.585902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.585929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.586962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.586989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.587103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.587130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.587263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.429 [2024-07-11 11:20:56.587305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.429 qpair failed and we were unable to recover it.
00:34:42.429 [2024-07-11 11:20:56.587428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.587457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.587599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.587626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.587764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.587801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.587895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.587923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.588899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.588994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.589860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.589977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.590809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.590963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.591946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.591973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.592944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.592972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.593099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.593127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.430 [2024-07-11 11:20:56.593265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.430 [2024-07-11 11:20:56.593292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.430 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.593390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.593419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.593531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.593559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.593656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.593686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.593791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.593818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.593906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.593934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.594912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.594939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.595893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.595920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.596927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.431 [2024-07-11 11:20:56.596956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.431 qpair failed and we were unable to recover it.
00:34:42.431 [2024-07-11 11:20:56.597058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.597857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.597886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.598034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.598072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.598193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.598220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.598314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.598340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 
00:34:42.431 [2024-07-11 11:20:56.598454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.598481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.598563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.431 [2024-07-11 11:20:56.598590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.431 qpair failed and we were unable to recover it. 00:34:42.431 [2024-07-11 11:20:56.598704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.598732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.598842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.598870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.598999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 
00:34:42.432 [2024-07-11 11:20:56.599868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.599896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.599985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.600827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.600974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 
00:34:42.432 [2024-07-11 11:20:56.601200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.601943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.601970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 
00:34:42.432 [2024-07-11 11:20:56.602666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.602957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.602984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.603889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.603916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 
00:34:42.432 [2024-07-11 11:20:56.604004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.604034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.604120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.604147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.604229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.432 [2024-07-11 11:20:56.604255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.432 qpair failed and we were unable to recover it. 00:34:42.432 [2024-07-11 11:20:56.604378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.604408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.604487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.604518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.604633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.604662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.604789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.604817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.604903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.604931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 
00:34:42.433 [2024-07-11 11:20:56.605340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.605920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.605947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 
00:34:42.433 [2024-07-11 11:20:56.606739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.606908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.606990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.607898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.607925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 
00:34:42.433 [2024-07-11 11:20:56.608022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.608888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.433 [2024-07-11 11:20:56.608977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.433 [2024-07-11 11:20:56.609015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.433 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-11 11:20:56.609358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.609851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.609967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-11 11:20:56.610782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.610902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.610929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.611947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.611974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-11 11:20:56.612116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.612857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.612975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.613096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.613261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.613442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 
00:34:42.434 [2024-07-11 11:20:56.613584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.613729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.613923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.613950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.434 qpair failed and we were unable to recover it. 00:34:42.434 [2024-07-11 11:20:56.614689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.434 [2024-07-11 11:20:56.614717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.614846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.614874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-11 11:20:56.614971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.614999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.615884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.615923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-11 11:20:56.616477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.616946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.616974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.617791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [2024-07-11 11:20:56.617936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.617963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.618913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.618940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.619028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.619055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 00:34:42.435 [2024-07-11 11:20:56.619174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.435 [2024-07-11 11:20:56.619202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.435 qpair failed and we were unable to recover it. 
00:34:42.435 [... the same three-line failure record repeats roughly 200 more times between 11:20:56.619 and 11:20:56.648, cycling over tqpair=0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90, always with addr=10.0.0.2, port=4420 and errno = 111; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:42.440 [2024-07-11 11:20:56.648298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.648325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.648467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.648495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.648584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.648614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.648741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.648788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.648888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.648916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.649010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.649037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.649167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.649193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.440 qpair failed and we were unable to recover it. 00:34:42.440 [2024-07-11 11:20:56.649276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.440 [2024-07-11 11:20:56.649303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.649385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.649412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.649526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.649552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-11 11:20:56.649672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.649700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.649828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.649858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.649986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.650967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.650994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-11 11:20:56.651112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.651230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.651374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.651517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.651679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.651862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.651892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-11 11:20:56.652520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.652971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.652997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.653701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 
00:34:42.441 [2024-07-11 11:20:56.653895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.653924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.654894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.654921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.441 qpair failed and we were unable to recover it. 00:34:42.441 [2024-07-11 11:20:56.655064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.441 [2024-07-11 11:20:56.655090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.655203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-11 11:20:56.655346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.655505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.655692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.655842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.655963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.655990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-11 11:20:56.656828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.656967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.656993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.657893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.657921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-11 11:20:56.658239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.658886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.658912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 
00:34:42.442 [2024-07-11 11:20:56.659586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.442 [2024-07-11 11:20:56.659959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.442 [2024-07-11 11:20:56.659985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.442 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.660731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-11 11:20:56.660914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.660941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.661953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.661980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-11 11:20:56.662236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.662881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.662997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.663166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.663308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.663478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.663621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-11 11:20:56.663783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.663946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.663973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.664832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.664975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.665114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 
00:34:42.443 [2024-07-11 11:20:56.665257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.665401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.665562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.665707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.443 [2024-07-11 11:20:56.665856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.443 [2024-07-11 11:20:56.665883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.443 qpair failed and we were unable to recover it. 00:34:42.444 [2024-07-11 11:20:56.666000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.444 [2024-07-11 11:20:56.666026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.444 qpair failed and we were unable to recover it. 00:34:42.444 [2024-07-11 11:20:56.666107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.444 [2024-07-11 11:20:56.666135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.444 qpair failed and we were unable to recover it. 00:34:42.444 [2024-07-11 11:20:56.666254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.444 [2024-07-11 11:20:56.666282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.444 qpair failed and we were unable to recover it. 00:34:42.444 [2024-07-11 11:20:56.666423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.444 [2024-07-11 11:20:56.666450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.444 qpair failed and we were unable to recover it. 00:34:42.444 [2024-07-11 11:20:56.666552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.444 [2024-07-11 11:20:56.666592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.444 qpair failed and we were unable to recover it. 
00:34:42.444 [2024-07-11 11:20:56.666718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.444 [2024-07-11 11:20:56.666765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.444 qpair failed and we were unable to recover it.
[the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:20:56.666718 through 11:20:56.694727, cycling over tqpair handles 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90; the identical repetitions are elided here]
00:34:42.449 [2024-07-11 11:20:56.694824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.694852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.694939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.694967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.695888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.695915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 
00:34:42.449 [2024-07-11 11:20:56.696126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.696904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.696931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 
00:34:42.449 [2024-07-11 11:20:56.697432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.449 [2024-07-11 11:20:56.697932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.449 [2024-07-11 11:20:56.697960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.449 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 
00:34:42.450 [2024-07-11 11:20:56.698816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.698938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.698965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.699876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.699990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 
00:34:42.450 [2024-07-11 11:20:56.700113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.700902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.700929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 
00:34:42.450 [2024-07-11 11:20:56.701447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.701931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.701958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.702085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.702111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.702229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.702256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.702400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.702427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.702507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.702534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.450 qpair failed and we were unable to recover it. 00:34:42.450 [2024-07-11 11:20:56.702620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.450 [2024-07-11 11:20:56.702648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.702794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.702822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 
00:34:42.451 [2024-07-11 11:20:56.702934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.702960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.703889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.703915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 
00:34:42.451 [2024-07-11 11:20:56.704232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.704904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.704933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 
00:34:42.451 [2024-07-11 11:20:56.705581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.705951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.705978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.451 qpair failed and we were unable to recover it. 00:34:42.451 [2024-07-11 11:20:56.706095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.451 [2024-07-11 11:20:56.706122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 
00:34:42.452 [2024-07-11 11:20:56.706850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.706879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.706999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.707908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.707935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 
00:34:42.452 [2024-07-11 11:20:56.708138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.708863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.708983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.709157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.709302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.709442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 
00:34:42.452 [2024-07-11 11:20:56.709556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.709703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.709862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.709901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.710849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.710877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 
00:34:42.452 [2024-07-11 11:20:56.710993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.711841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.711980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.452 [2024-07-11 11:20:56.712017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.452 qpair failed and we were unable to recover it. 00:34:42.452 [2024-07-11 11:20:56.712137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.453 [2024-07-11 11:20:56.712164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.453 qpair failed and we were unable to recover it. 00:34:42.453 [2024-07-11 11:20:56.712280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.453 [2024-07-11 11:20:56.712307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.453 qpair failed and we were unable to recover it. 
00:34:42.453 [2024-07-11 11:20:56.712420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:42.453 [2024-07-11 11:20:56.712447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 
00:34:42.453 qpair failed and we were unable to recover it. 
00:34:42.457 [... the same three-line error triplet repeats roughly 210 more times between 2024-07-11 11:20:56.712 and 11:20:56.740: every connect() to addr=10.0.0.2, port=4420 fails with errno = 111, alternating across tqpair=0x7f74b0000b90, 0x7f74b8000b90, and 0x219c600, and each qpair fails without recovery ...]
00:34:42.458 [2024-07-11 11:20:56.739895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.739922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.740968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.740995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 
00:34:42.458 [2024-07-11 11:20:56.741230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.741949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.741989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 
00:34:42.458 [2024-07-11 11:20:56.742650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.742895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.742923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 
00:34:42.458 [2024-07-11 11:20:56.743881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.743908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.743994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.744897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.744923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.745064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.745091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 
00:34:42.458 [2024-07-11 11:20:56.745210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.745237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.458 [2024-07-11 11:20:56.745352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.458 [2024-07-11 11:20:56.745379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.458 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.745475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.745505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.745645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.745671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.745789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.745817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.745933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.745959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 
00:34:42.459 [2024-07-11 11:20:56.746594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.746892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.746919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.747772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 
00:34:42.459 [2024-07-11 11:20:56.747913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.747940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.748904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.748930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 
00:34:42.459 [2024-07-11 11:20:56.749135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.749933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.749960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.750045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.750071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.459 qpair failed and we were unable to recover it. 00:34:42.459 [2024-07-11 11:20:56.750160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.459 [2024-07-11 11:20:56.750188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.750272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.750299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 
00:34:42.460 [2024-07-11 11:20:56.750410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.750437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.750550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.750577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.750699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.750727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.750820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.750847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.750975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 
00:34:42.460 [2024-07-11 11:20:56.751766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.751909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.751936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.752963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.752990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 
00:34:42.460 [2024-07-11 11:20:56.753083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.753941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.753967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 
00:34:42.460 [2024-07-11 11:20:56.754307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.754952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.754979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 
00:34:42.460 [2024-07-11 11:20:56.755768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.755893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.755919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.460 [2024-07-11 11:20:56.756004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.460 [2024-07-11 11:20:56.756031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.460 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 00:34:42.461 [2024-07-11 11:20:56.756936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.461 [2024-07-11 11:20:56.756963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.461 qpair failed and we were unable to recover it. 
00:34:42.461 [2024-07-11 11:20:56.757109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.461 [2024-07-11 11:20:56.757136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.461 qpair failed and we were unable to recover it.
00:34:42.466 [2024-07-11 11:20:56.757217 .. 11:20:56.784785] (the same connect()/qpair-failure triplet repeats roughly 200 more times over this interval, cycling through tqpair=0x7f74c0000b90, 0x7f74b8000b90, 0x7f74b0000b90, and 0x219c600, always against addr=10.0.0.2, port=4420, always with errno = 111)
00:34:42.466 [2024-07-11 11:20:56.784867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.784898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.784985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.785954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.785979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 
00:34:42.466 [2024-07-11 11:20:56.786064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.786904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.786932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 
00:34:42.466 [2024-07-11 11:20:56.787419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.787927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.787954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 
00:34:42.466 [2024-07-11 11:20:56.788621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.788961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.788987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 
00:34:42.466 [2024-07-11 11:20:56.789862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.789888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.789978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.466 [2024-07-11 11:20:56.790005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.466 qpair failed and we were unable to recover it. 00:34:42.466 [2024-07-11 11:20:56.790149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.790875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.790902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 
00:34:42.467 [2024-07-11 11:20:56.791148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.791956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.791985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 
00:34:42.467 [2024-07-11 11:20:56.792477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.792935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.792962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 
00:34:42.467 [2024-07-11 11:20:56.793659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.793901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.793928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 00:34:42.467 [2024-07-11 11:20:56.794883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.467 qpair failed and we were unable to recover it. 
00:34:42.467 [2024-07-11 11:20:56.795001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.467 [2024-07-11 11:20:56.795027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.795911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.795938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 
00:34:42.468 [2024-07-11 11:20:56.796285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.796953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.796981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 
00:34:42.468 [2024-07-11 11:20:56.797656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.797827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.797977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.798910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.798936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 
00:34:42.468 [2024-07-11 11:20:56.799056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.799964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.799991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 
00:34:42.468 [2024-07-11 11:20:56.800306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.468 [2024-07-11 11:20:56.800891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.468 [2024-07-11 11:20:56.800919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.468 qpair failed and we were unable to recover it. 00:34:42.469 [2024-07-11 11:20:56.801006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it. 00:34:42.469 [2024-07-11 11:20:56.801113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it. 00:34:42.469 [2024-07-11 11:20:56.801255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it. 00:34:42.469 [2024-07-11 11:20:56.801395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it. 00:34:42.469 [2024-07-11 11:20:56.801527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it. 
00:34:42.469 [2024-07-11 11:20:56.801670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.469 [2024-07-11 11:20:56.801697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.469 qpair failed and we were unable to recover it.
00:34:42.758 [... the same three-entry error (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 11:20:56.801670 through 11:20:56.828647, with only the microsecond timestamp and the tqpair pointer varying across 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90, all targeting addr=10.0.0.2, port=4420; no attempt in this span succeeded or recovered ...]
00:34:42.758 [2024-07-11 11:20:56.828765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.828792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.828889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.828915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.829029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.829138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.829260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.829379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.758 [2024-07-11 11:20:56.829521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.758 qpair failed and we were unable to recover it. 00:34:42.758 [2024-07-11 11:20:56.829632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.829660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.829746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.829780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.829864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.829891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 
00:34:42.759 [2024-07-11 11:20:56.829975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.830906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.830936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 
00:34:42.759 [2024-07-11 11:20:56.831302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.831885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.831911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 
00:34:42.759 [2024-07-11 11:20:56.832624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.832888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.832979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.833817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 
00:34:42.759 [2024-07-11 11:20:56.833958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.833985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.759 [2024-07-11 11:20:56.834881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.759 [2024-07-11 11:20:56.834908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.759 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.834993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-11 11:20:56.835224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.835925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.835952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-11 11:20:56.836604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.836963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.836990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.837811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-11 11:20:56.837921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.837947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.838957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.838984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 
00:34:42.760 [2024-07-11 11:20:56.839075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.839886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.839976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.760 [2024-07-11 11:20:56.840002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.760 qpair failed and we were unable to recover it. 00:34:42.760 [2024-07-11 11:20:56.840085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-11 11:20:56.840314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.840929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.840955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-11 11:20:56.841467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.841902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.841986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-11 11:20:56.842768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.842886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.842913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.843927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.843954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 
00:34:42.761 [2024-07-11 11:20:56.844046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.844904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.844932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.845015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.761 [2024-07-11 11:20:56.845043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.761 qpair failed and we were unable to recover it. 00:34:42.761 [2024-07-11 11:20:56.845156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.762 [2024-07-11 11:20:56.845183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.762 qpair failed and we were unable to recover it. 
00:34:42.762 [2024-07-11 11:20:56.845276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.762 [2024-07-11 11:20:56.845303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.762 qpair failed and we were unable to recover it.
00:34:42.762 [2024-07-11 11:20:56.845394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.762 [2024-07-11 11:20:56.845421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.762 qpair failed and we were unable to recover it.
00:34:42.762 [... 2024-07-11 11:20:56.845549 through 11:20:56.873847: the same three-line sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7f74b8000b90, 0x7f74b0000b90, 0x7f74c0000b90, and 0x219c600, all with addr=10.0.0.2, port=4420 ...]
00:34:42.767 [2024-07-11 11:20:56.873872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.767 [2024-07-11 11:20:56.873900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.767 qpair failed and we were unable to recover it.
00:34:42.767 [2024-07-11 11:20:56.874051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.874895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.874927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 
00:34:42.767 [2024-07-11 11:20:56.875466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.875897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.875924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.876748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 
00:34:42.767 [2024-07-11 11:20:56.876901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.767 [2024-07-11 11:20:56.876928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.767 qpair failed and we were unable to recover it. 00:34:42.767 [2024-07-11 11:20:56.877016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.877932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.877961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 
00:34:42.768 [2024-07-11 11:20:56.878316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.878878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.878917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 
00:34:42.768 [2024-07-11 11:20:56.879692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.879963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.879991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.880902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.880931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 
00:34:42.768 [2024-07-11 11:20:56.881053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.881917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.881944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.882031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.882058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.882142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.882168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.882274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.882301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 
00:34:42.768 [2024-07-11 11:20:56.882414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.882440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.768 [2024-07-11 11:20:56.882557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.768 [2024-07-11 11:20:56.882583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.768 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.882702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.882730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.882834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.882864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 
00:34:42.769 [2024-07-11 11:20:56.883849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.883971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.883998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.884923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.884951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.885041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 
00:34:42.769 [2024-07-11 11:20:56.885215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.885384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.885540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.885688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.885872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.885901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 
00:34:42.769 [2024-07-11 11:20:56.886676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.886935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.886962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.887888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.887915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 
00:34:42.769 [2024-07-11 11:20:56.888036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.888063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.888178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.888206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.769 [2024-07-11 11:20:56.888312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.769 [2024-07-11 11:20:56.888352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.769 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.888497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.888525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.888614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.888641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.888745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.888778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.888891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.888918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-11 11:20:56.889426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.889966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.889993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-11 11:20:56.890768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.890909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.890936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.891962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.891989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 00:34:42.770 [2024-07-11 11:20:56.892102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.892130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it. 
00:34:42.770 [2024-07-11 11:20:56.892249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.770 [2024-07-11 11:20:56.892278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.770 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously from 11:20:56.892 through 11:20:56.917, cycling through tqpair=0x7f74b8000b90, 0x7f74c0000b90, 0x7f74b0000b90, and 0x219c600, all against addr=10.0.0.2, port=4420 ...]
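A note on the failure mode above: on Linux, errno = 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 while the initiator kept retrying its qpair connects; the `Killed "${NVMF_APP[@]}"` line just below shows why, since the test had terminated the target application. The following minimal sketch (plain POSIX sockets, not SPDK code, assuming a Linux host where 10.0.0.2 is reachable but nothing listens on port 4420) reproduces the same errno:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but no listener on the port, Linux fails
         * the connect with errno 111 (ECONNREFUSED) -- the same value
         * posix_sock_create() logs above. An unreachable host would instead
         * give EHOSTUNREACH or a timeout. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Once a listener comes back on that address and port (here, once the harness relaunches the target), the identical connect() call starts succeeding, which is why the host driver keeps retrying rather than giving up.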
00:34:42.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 406199 Killed "${NVMF_APP[@]}" "$@" 00:34:42.775 [2024-07-11 11:20:56.917205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 [2024-07-11 11:20:56.917379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 [2024-07-11 11:20:56.917495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:42.775 [2024-07-11 11:20:56.917616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:42.775 [2024-07-11 11:20:56.917765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 [2024-07-11 11:20:56.917908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.917935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 wit 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:42.775 h addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 [2024-07-11 11:20:56.918027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.918053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 00:34:42.775 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:42.775 [2024-07-11 11:20:56.918136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.775 [2024-07-11 11:20:56.918164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.775 qpair failed and we were unable to recover it. 
00:34:42.775 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.775 [2024-07-11 11:20:56.918279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.775 [2024-07-11 11:20:56.918307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.775 qpair failed and we were unable to recover it.
00:34:42.775 [2024-07-11 11:20:56.918568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.775 [2024-07-11 11:20:56.918597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.775 qpair failed and we were unable to recover it.
00:34:42.775 [2024-07-11 11:20:56.920084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.775 [2024-07-11 11:20:56.920112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.775 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.920745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.920795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=406757
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 406757
00:34:42.776 [2024-07-11 11:20:56.922257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.922286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.922505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.922533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 406757 ']'
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:42.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:42.776 11:20:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:42.776 [2024-07-11 11:20:56.923161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.923188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.923436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.923466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.923672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.923701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
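At this point the test relaunches the target (`nvmf_tgt -i 0 -e 0xFFFF -m 0xF0`, i.e. core mask 0xF0 = cores 4-7, inside the cvl_0_0_ns_spdk namespace) and blocks in waitforlisten until pid 406757 is accepting connections on /var/tmp/spdk.sock, with max_retries=100. A rough sketch of that kind of poll loop, assuming a plain AF_UNIX connect probe (the helper name and the 100 ms backoff are illustrative, not the test framework's actual implementation):

```c
/* Sketch of what a "waitforlisten"-style helper does: poll until a process
 * accepts connections on its UNIX domain RPC socket, giving up after a fixed
 * retry budget (the trace shows max_retries=100). Illustrative only. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;        /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);  /* back off 100 ms between attempts */
    }
    return -1;               /* never came up within the budget */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("listening");
    else
        puts("timed out");
    return 0;
}
```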
00:34:42.776 [2024-07-11 11:20:56.924335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.924364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.925307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.776 [2024-07-11 11:20:56.925334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.776 qpair failed and we were unable to recover it.
00:34:42.776 [2024-07-11 11:20:56.925423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.777 [2024-07-11 11:20:56.925450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.777 qpair failed and we were unable to recover it.
00:34:42.777 [2024-07-11 11:20:56.925549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.777 [2024-07-11 11:20:56.925592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.777 qpair failed and we were unable to recover it.
00:34:42.780 [2024-07-11 11:20:56.942549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.942575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.942664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.942691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.942804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.942831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.942909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.942936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-11 11:20:56.943739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.943866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.943893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.944891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.944918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 
00:34:42.780 [2024-07-11 11:20:56.945120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.780 [2024-07-11 11:20:56.945779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.780 qpair failed and we were unable to recover it. 00:34:42.780 [2024-07-11 11:20:56.945859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.945887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.945969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.945997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 
00:34:42.781 [2024-07-11 11:20:56.946328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.946899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.946926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 
00:34:42.781 [2024-07-11 11:20:56.947738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.947894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.947921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.948900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.948926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 
00:34:42.781 [2024-07-11 11:20:56.949188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.949947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.949974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 
00:34:42.781 [2024-07-11 11:20:56.950414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.950925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.950952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.951076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.951103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.951220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.781 [2024-07-11 11:20:56.951247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.781 qpair failed and we were unable to recover it. 00:34:42.781 [2024-07-11 11:20:56.951340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.951367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.951444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.951472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.951596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.951626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 
00:34:42.782 [2024-07-11 11:20:56.951708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.951736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.951869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.951897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.951981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.952844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 
00:34:42.782 [2024-07-11 11:20:56.952961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.952988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.953887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.953915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 
00:34:42.782 [2024-07-11 11:20:56.954257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.954966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.954992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 
00:34:42.782 [2024-07-11 11:20:56.955605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.955892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.955975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.956003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.956118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.956146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.956256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.956295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.956400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.956440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.956535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.782 [2024-07-11 11:20:56.956563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.782 qpair failed and we were unable to recover it. 00:34:42.782 [2024-07-11 11:20:56.956684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.956710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.956835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.956862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 
00:34:42.783 [2024-07-11 11:20:56.956983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.957910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.957937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 
00:34:42.783 [2024-07-11 11:20:56.958303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.958948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.958975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 
00:34:42.783 [2024-07-11 11:20:56.959588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.959929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.959956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 
00:34:42.783 [2024-07-11 11:20:56.960877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.960903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.960990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.783 [2024-07-11 11:20:56.961771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.783 qpair failed and we were unable to recover it. 00:34:42.783 [2024-07-11 11:20:56.961851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.961877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.961956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.961982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-11 11:20:56.962057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.962886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.962982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-11 11:20:56.963407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.963911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.963995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-11 11:20:56.964609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.964905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.964932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.965772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 
00:34:42.784 [2024-07-11 11:20:56.965897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.965925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.966034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.966061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.784 [2024-07-11 11:20:56.966157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.784 [2024-07-11 11:20:56.966184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.784 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.966972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.966999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-11 11:20:56.967273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.967930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.967957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-11 11:20:56.968586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.968880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.968909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969358] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:34:42.785 [2024-07-11 11:20:56.969385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969430] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.785 [2024-07-11 11:20:56.969498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
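The two non-error records interleaved just above come from a separate nvmf target process being launched while the initiator keeps retrying: SPDK v24.09-pre (git sha1 e64f085ad) begins initialization against DPDK 22.11.4, and the bracketed line lists its DPDK EAL parameters. Reading them: -c 0xF0 is binary 1111 0000, so this target is pinned to cores 4-7; --file-prefix=spdk0 together with --proc-type=auto keeps its hugepage files and shared state separate from other SPDK processes on the node; --base-virtaddr=0x200000000000 asks EAL to map memory at a fixed base address so multi-process setups can attach at matching addresses. Until a target is actually listening on 10.0.0.2:4420, every initiator connect() is refused, which is consistent with the errno 111 records surrounding these lines.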
00:34:42.785 [2024-07-11 11:20:56.969737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.969892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.969918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.970899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.970928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 
00:34:42.785 [2024-07-11 11:20:56.971047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.971074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.971189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.971217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.971319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.971346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.785 [2024-07-11 11:20:56.971487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.785 [2024-07-11 11:20:56.971514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.785 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.971613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.971654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.971776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.971804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.971918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.971944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-11 11:20:56.972386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.972894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.972921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-11 11:20:56.973692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.973965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.973992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.974939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.974968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-11 11:20:56.975094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.975858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.975885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 
00:34:42.786 [2024-07-11 11:20:56.976443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.976864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.976904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.786 [2024-07-11 11:20:56.977009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.786 [2024-07-11 11:20:56.977060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.786 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 
00:34:42.787 [2024-07-11 11:20:56.977807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.977928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.977955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.978949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.978976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 
00:34:42.787 [2024-07-11 11:20:56.979094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.979860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.979974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 
00:34:42.787 [2024-07-11 11:20:56.980352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.980960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.980986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.981063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.981090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.981231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.981258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.981348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.981377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.981493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.981525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 00:34:42.787 [2024-07-11 11:20:56.981613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.787 [2024-07-11 11:20:56.981642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.787 qpair failed and we were unable to recover it. 
00:34:42.787 [2024-07-11 11:20:56.981727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.787 [2024-07-11 11:20:56.981760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.787 qpair failed and we were unable to recover it.
00:34:42.787-00:34:42.793 [2024-07-11 11:20:56.981844 - 11:20:57.010611] (the same three-line error sequence repeats for every subsequent connection attempt in this interval, cycling through tqpair contexts 0x7f74c0000b90, 0x7f74b0000b90, 0x7f74b8000b90, and 0x219c600, always against addr=10.0.0.2, port=4420)
00:34:42.792 EAL: No free 2048 kB hugepages reported on node 1
00:34:42.793 [2024-07-11 11:20:57.010692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.010717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.010825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.010856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.010972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.011940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.011965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 
00:34:42.793 [2024-07-11 11:20:57.012073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.012917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.012945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 
00:34:42.793 [2024-07-11 11:20:57.013398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.013913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.013941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.014061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.014087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.014172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.014199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.014285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.014311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.793 [2024-07-11 11:20:57.014428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.793 [2024-07-11 11:20:57.014456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.793 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.014572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.014603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 
00:34:42.794 [2024-07-11 11:20:57.014689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.014716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.014817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.014844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.014960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.014986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.015876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.015903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 
00:34:42.794 [2024-07-11 11:20:57.016017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.016902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.016929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 
00:34:42.794 [2024-07-11 11:20:57.017413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.017886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.017978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 
00:34:42.794 [2024-07-11 11:20:57.018726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.018880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.018907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.019893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.794 [2024-07-11 11:20:57.019924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.794 qpair failed and we were unable to recover it. 00:34:42.794 [2024-07-11 11:20:57.020022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 
00:34:42.795 [2024-07-11 11:20:57.020135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.020930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.020957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 
00:34:42.795 [2024-07-11 11:20:57.021417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.021894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.021923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.022734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 
00:34:42.795 [2024-07-11 11:20:57.022877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.022906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.023917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.023944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 
00:34:42.795 [2024-07-11 11:20:57.024151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.024964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.024991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.025089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.025116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.025226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.025254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 00:34:42.795 [2024-07-11 11:20:57.025381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.795 [2024-07-11 11:20:57.025421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.795 qpair failed and we were unable to recover it. 
00:34:42.796 [2024-07-11 11:20:57.025560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.025600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.025731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.025766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.025883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.025910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.025996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.026823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 
00:34:42.796 [2024-07-11 11:20:57.026962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.026988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.027916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.027943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.028032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.028059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 00:34:42.796 [2024-07-11 11:20:57.028175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.028201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 
00:34:42.796 [2024-07-11 11:20:57.028314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.796 [2024-07-11 11:20:57.028340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.796 qpair failed and we were unable to recover it. 
00:34:42.796 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 11:20:57.028453 through 11:20:57.037509, cycling across tqpairs 0x219c600, 0x7f74c0000b90, 0x7f74b8000b90, and 0x7f74b0000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:42.798 [... the same failure triplet continues from 11:20:57.037620 through 11:20:57.038502 on tqpairs 0x7f74b0000b90, 0x7f74c0000b90, and 0x7f74b8000b90 ...] 00:34:42.798 [2024-07-11 11:20:57.038564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:42.798 [... the failure triplet resumes at 11:20:57.038591 on tqpair=0x7f74b8000b90 ...]
00:34:42.798 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats from 11:20:57.038844 through 11:20:57.056522, cycling across tqpairs 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90, and 0x7f74c0000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:42.802 [2024-07-11 11:20:57.056638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.056666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.056793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.056822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.056943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.057895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.057923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 
00:34:42.802 [2024-07-11 11:20:57.058039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.058939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.058968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 
00:34:42.802 [2024-07-11 11:20:57.059312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.059955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.059984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 
00:34:42.802 [2024-07-11 11:20:57.060714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.060963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.060991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.061136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.061251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.061389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.061529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.802 [2024-07-11 11:20:57.061681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.802 qpair failed and we were unable to recover it. 00:34:42.802 [2024-07-11 11:20:57.061783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.061813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.061907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.061936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 
00:34:42.803 [2024-07-11 11:20:57.062025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.062897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.062927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 
00:34:42.803 [2024-07-11 11:20:57.063367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.063917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.063945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 
00:34:42.803 [2024-07-11 11:20:57.064676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.064853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.064974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.065896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.065924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 
00:34:42.803 [2024-07-11 11:20:57.066012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.066886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.066998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.067026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.803 [2024-07-11 11:20:57.067156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.803 [2024-07-11 11:20:57.067186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.803 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.067297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 
00:34:42.804 [2024-07-11 11:20:57.067409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.067532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.067649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.067770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.067882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.067909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 
00:34:42.804 [2024-07-11 11:20:57.068737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.068887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.068927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.069902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.069984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 
00:34:42.804 [2024-07-11 11:20:57.070125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.070936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.070963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 
00:34:42.804 [2024-07-11 11:20:57.071485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.071917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.071945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 00:34:42.804 [2024-07-11 11:20:57.072757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.804 [2024-07-11 11:20:57.072785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.804 qpair failed and we were unable to recover it. 
00:34:42.805 [2024-07-11 11:20:57.072898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.072926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.073934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.073962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 00:34:42.805 [2024-07-11 11:20:57.074044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.805 [2024-07-11 11:20:57.074072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.805 qpair failed and we were unable to recover it. 
00:34:42.805 [2024-07-11 11:20:57.074158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.805 [2024-07-11 11:20:57.074185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.805 qpair failed and we were unable to recover it.
00:34:42.805 [2024-07-11 11:20:57.074324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.805 [2024-07-11 11:20:57.074351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.805 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats continuously from 11:20:57.074445 through 11:20:57.102504, cycling across tqpairs 0x7f74b0000b90, 0x7f74b8000b90, 0x7f74c0000b90, and 0x219c600; every attempt targets addr=10.0.0.2, port=4420 and fails with connect() errno = 111, and no qpair is recovered ...]
00:34:42.811 [2024-07-11 11:20:57.102504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.811 [2024-07-11 11:20:57.102545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.811 qpair failed and we were unable to recover it.
00:34:42.811 [2024-07-11 11:20:57.102642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.102671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.102811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.102852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.102974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.103917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.103943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 
00:34:42.811 [2024-07-11 11:20:57.104054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.104880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.104908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 
00:34:42.811 [2024-07-11 11:20:57.105427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.105932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.811 [2024-07-11 11:20:57.105960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.811 qpair failed and we were unable to recover it. 00:34:42.811 [2024-07-11 11:20:57.106076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 
00:34:42.812 [2024-07-11 11:20:57.106720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.106876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.106991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.107913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.107942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 
00:34:42.812 [2024-07-11 11:20:57.108090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.108965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.108996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 
00:34:42.812 [2024-07-11 11:20:57.109458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.109857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.109971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 
00:34:42.812 [2024-07-11 11:20:57.110787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.110936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.110964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.111049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.111080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.111194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.111222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.111316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.111345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.812 [2024-07-11 11:20:57.111466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.812 [2024-07-11 11:20:57.111495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.812 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.111616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.111643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.111762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.111790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.111877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.111904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.111990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 
00:34:42.813 [2024-07-11 11:20:57.112137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.112902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.112986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 
00:34:42.813 [2024-07-11 11:20:57.113538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.113864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.113984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.114790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 
00:34:42.813 [2024-07-11 11:20:57.114901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.114928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.115889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.115982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.116096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 
00:34:42.813 [2024-07-11 11:20:57.116271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.116445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.116598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.116763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.116882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.116910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.117056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.813 [2024-07-11 11:20:57.117084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.813 qpair failed and we were unable to recover it. 00:34:42.813 [2024-07-11 11:20:57.117201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.117367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.117480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.117598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 
00:34:42.814 [2024-07-11 11:20:57.117768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.117895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.117924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.118922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.118950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 
00:34:42.814 [2024-07-11 11:20:57.119046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.119886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.119915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.120035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.120063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.120148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.120175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 00:34:42.814 [2024-07-11 11:20:57.120253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.814 [2024-07-11 11:20:57.120281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.814 qpair failed and we were unable to recover it. 
00:34:42.814 [2024-07-11 11:20:57.120368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.814 [2024-07-11 11:20:57.120395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:42.814 qpair failed and we were unable to recover it.
00:34:42.815 [2024-07-11 11:20:57.123473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.815 [2024-07-11 11:20:57.123515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.815 qpair failed and we were unable to recover it.
00:34:42.815 [2024-07-11 11:20:57.123802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.815 [2024-07-11 11:20:57.123843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:42.815 qpair failed and we were unable to recover it.
00:34:42.815 [2024-07-11 11:20:57.125665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.815 [2024-07-11 11:20:57.125705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.815 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure record triples, differing only in timestamp, repeat throughout this interval for the same four tqpair targets: 0x219c600, 0x7f74b0000b90, 0x7f74b8000b90 and 0x7f74c0000b90]
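errno = 111 on Linux is ECONNREFUSED: each attempt reaches 10.0.0.2, but nothing is accepting on port 4420 (the conventional NVMe/TCP port) at that moment, so the initiator's connect() is refused and the qpair cannot be established. A minimal way to observe the same failure mode from the test host, assuming a netcat binary is available; this probe is illustrative and not part of the test:

# Probe the NVMe/TCP listen port the initiator is dialing; while no target
# is listening, this fails exactly like the records above (ECONNREFUSED).
nc -zv 10.0.0.2 4420 || echo "refused: errno 111 (ECONNREFUSED)"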
[further repeats of the same failure triples omitted]
00:34:42.816 [2024-07-11 11:20:57.130641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:42.816 [2024-07-11 11:20:57.130678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:42.816 [2024-07-11 11:20:57.130694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:42.816 [2024-07-11 11:20:57.130707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:42.816 [2024-07-11 11:20:57.130719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:42.816 [2024-07-11 11:20:57.130682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.816 [2024-07-11 11:20:57.130708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:42.816 qpair failed and we were unable to recover it.
00:34:42.816 [2024-07-11 11:20:57.130812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:42.816 [2024-07-11 11:20:57.130864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:34:42.816 [2024-07-11 11:20:57.130892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:34:42.816 [2024-07-11 11:20:57.130895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[the remaining records in this interval are further repeats of the same failure triples, interleaved with the notices above in the raw capture]
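The app_setup_trace notices spell out how to pull the tracepoint data while the target is still running. A minimal sketch acting on them, assuming a shell on the test host with spdk_trace on PATH and shared-memory instance id 0 as the notice reports; the destination path is illustrative:

# Snapshot live 'nvmf' tracepoint events from instance 0
# (command quoted verbatim from the NOTICE above).
spdk_trace -s nvmf -i 0
# Or keep the raw trace file for offline analysis, per the last NOTICE.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0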
[connect() failure / qpair-recovery-failure triples for the same four tqpair targets keep repeating, differing only in timestamp, through the end of this burst]
00:34:42.819 [2024-07-11 11:20:57.146356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.820 [2024-07-11 11:20:57.146384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420
00:34:42.820 qpair failed and we were unable to recover it.
00:34:42.820 [2024-07-11 11:20:57.146485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.146527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.146645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.146673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.146799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.146826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.146921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.146948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 
00:34:42.820 [2024-07-11 11:20:57.147759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.147893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.147921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.148887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.148916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 
00:34:42.820 [2024-07-11 11:20:57.149006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.149896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.149988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 
00:34:42.820 [2024-07-11 11:20:57.150247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.150878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.150975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.151011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.151114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.151142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.151233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.151260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 00:34:42.820 [2024-07-11 11:20:57.151342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.820 [2024-07-11 11:20:57.151369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.820 qpair failed and we were unable to recover it. 
00:34:42.820 [2024-07-11 11:20:57.151479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.151508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.151609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.151650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.151750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.151803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.151899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.151927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 
00:34:42.821 [2024-07-11 11:20:57.152713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:42.821 [2024-07-11 11:20:57.152835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.821 [2024-07-11 11:20:57.152862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:42.821 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.152941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.152968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.153850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-11 11:20:57.153959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.153990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.154939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.154979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-11 11:20:57.155313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.155961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.155988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.156069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.156217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.156327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.156451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 
00:34:43.090 [2024-07-11 11:20:57.156561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.090 [2024-07-11 11:20:57.156670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.090 [2024-07-11 11:20:57.156698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.090 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.156813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.156846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.156926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.156953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-11 11:20:57.157842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.157958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.157986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.158939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.158967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-11 11:20:57.159054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.159932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.159959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-11 11:20:57.160265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.160883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.160910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 
00:34:43.091 [2024-07-11 11:20:57.161497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.091 qpair failed and we were unable to recover it. 00:34:43.091 [2024-07-11 11:20:57.161769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.091 [2024-07-11 11:20:57.161815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.161930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.161959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 00:34:43.092 [2024-07-11 11:20:57.162715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.092 [2024-07-11 11:20:57.162742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.092 qpair failed and we were unable to recover it. 
00:34:43.092 [2024-07-11 11:20:57.162830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.092 [2024-07-11 11:20:57.162859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:43.092 qpair failed and we were unable to recover it.
00:34:43.092 [... the same three-record pattern (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 11:20:57.162941 and 11:20:57.188808, cycling through tqpair handles 0x7f74b0000b90, 0x7f74b8000b90, 0x7f74c0000b90, and 0x219c600, always against addr=10.0.0.2, port=4420 ...]
00:34:43.097 [2024-07-11 11:20:57.188916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.097 [2024-07-11 11:20:57.188944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:43.097 qpair failed and we were unable to recover it.
00:34:43.097 [2024-07-11 11:20:57.189030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.189900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.189927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 
00:34:43.097 [2024-07-11 11:20:57.190244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.190918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.190947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.191033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.191061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.191154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.191181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.097 [2024-07-11 11:20:57.191265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.097 [2024-07-11 11:20:57.191292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.097 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.191405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.191432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-11 11:20:57.191523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.191550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.191634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.191661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.191773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.191802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.191897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.191926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-11 11:20:57.192749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.192891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.192982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.193902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.193930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-11 11:20:57.194023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.194956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.194983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.195064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.195091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 
00:34:43.098 [2024-07-11 11:20:57.195176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.195205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.195295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.195322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.195413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.098 [2024-07-11 11:20:57.195440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.098 qpair failed and we were unable to recover it. 00:34:43.098 [2024-07-11 11:20:57.195525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.195552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.195639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.195667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.195764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.195791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.195874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.195902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.195988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-11 11:20:57.196328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.196911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.196939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-11 11:20:57.197469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.197894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.197985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-11 11:20:57.198779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.198901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.198928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.199816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 
00:34:43.099 [2024-07-11 11:20:57.199930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.199961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.200055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.200082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.200204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.200230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.200314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.099 [2024-07-11 11:20:57.200340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.099 qpair failed and we were unable to recover it. 00:34:43.099 [2024-07-11 11:20:57.200430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.200456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.200539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.200566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.200653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.200679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.200796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.200824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.200914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.200941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-11 11:20:57.201161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.201889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.201917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-11 11:20:57.202346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.202890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.202917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-11 11:20:57.203468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.203919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.203946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.204035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.204063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.204144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.204171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.204284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.204311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.204400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.204428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 00:34:43.100 [2024-07-11 11:20:57.204511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.100 [2024-07-11 11:20:57.204538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.100 qpair failed and we were unable to recover it. 
00:34:43.100 [2024-07-11 11:20:57.204624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-11 11:20:57.204652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.100 [2024-07-11 11:20:57.204775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.100 [2024-07-11 11:20:57.204803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420
00:34:43.100 qpair failed and we were unable to recover it.
00:34:43.101 [2024-07-11 11:20:57.205457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.101 [2024-07-11 11:20:57.205488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420
00:34:43.101 qpair failed and we were unable to recover it.
00:34:43.106 (the three preceding messages repeat continuously from [2024-07-11 11:20:57.204624] through [2024-07-11 11:20:57.229629], alternating between tqpair=0x219c600, tqpair=0x7f74b0000b90 and tqpair=0x7f74b8000b90; every connect() to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair fails without recovery)
00:34:43.106 [2024-07-11 11:20:57.229709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.229735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.229827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.229854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.229940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.229967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.230872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.230899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 
00:34:43.106 [2024-07-11 11:20:57.230986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.231883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.231910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 
00:34:43.106 [2024-07-11 11:20:57.232241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.232932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.232957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.233033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.233199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.233302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 
00:34:43.106 [2024-07-11 11:20:57.233413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.233516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.106 [2024-07-11 11:20:57.233626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.106 [2024-07-11 11:20:57.233651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.106 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.233731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.233768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.233860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.233886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.233968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.233994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 
00:34:43.107 [2024-07-11 11:20:57.234509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.234974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 
00:34:43.107 [2024-07-11 11:20:57.235683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.235938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.235966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.236042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.236069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.107 qpair failed and we were unable to recover it. 00:34:43.107 [2024-07-11 11:20:57.236159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.107 [2024-07-11 11:20:57.236186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.236272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.236391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.236522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.236663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.236803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-11 11:20:57.236949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.236977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.237915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.237941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-11 11:20:57.238128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.238916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.238942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-11 11:20:57.239251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.239909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.239937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.240029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.240056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.240154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.240182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.240273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.240299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.240392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.240418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 
00:34:43.108 [2024-07-11 11:20:57.240513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.108 [2024-07-11 11:20:57.240541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74c0000b90 with addr=10.0.0.2, port=4420 00:34:43.108 qpair failed and we were unable to recover it. 00:34:43.108 [2024-07-11 11:20:57.240649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.240689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.240898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.240927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b0000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f74b8000b90 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 00:34:43.109 [2024-07-11 11:20:57.241750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.241783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219c600 with addr=10.0.0.2, port=4420 00:34:43.109 qpair failed and we were unable to recover it. 
00:34:43.109 A controller has encountered a failure and is being reset.
00:34:43.109 [2024-07-11 11:20:57.242655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.109 [2024-07-11 11:20:57.242693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21aa5b0 with addr=10.0.0.2, port=4420 [2024-07-11 11:20:57.242712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21aa5b0 is same with the state(5) to be set 00:34:43.109 [2024-07-11 11:20:57.242739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21aa5b0 (9): Bad file descriptor 00:34:43.109 [2024-07-11 11:20:57.242765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:43.109 [2024-07-11 11:20:57.242780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:43.109 [2024-07-11 11:20:57.242798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:43.109 Unable to reset the controller.
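Errno 111 throughout the retry storm above is ECONNREFUSED: while the test intentionally took the target down, nothing was accepting on 10.0.0.2:4420, so every qpair reconnect attempt was refused until the controller was finally marked unrecoverable. One quick way to confirm that symptom by hand is sketched below; the nc/ss probes are illustrative only and are not part of the autotest scripts.

  # Initiator side: probe the address/port the qpairs were retrying (values taken from the log above).
  nc -z -w 1 10.0.0.2 4420 && echo 'listener up' || echo 'connect refused or timed out'
  # Target side: show whether anything is actually bound to TCP port 4420.
  ss -ltn 'sport = :4420'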
00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 Malloc0 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 [2024-07-11 11:20:57.311732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 [2024-07-11 11:20:57.339993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.109 11:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 406297 00:34:44.043 Controller properly reset. 00:34:49.298 Initializing NVMe Controllers 00:34:49.298 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:49.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:49.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:49.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:49.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:49.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:49.298 Initialization complete. Launching workers. 
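The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py, so the same target bring-up can be reproduced by hand. A minimal sketch follows, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket (that socket path is an assumption; this log drives the RPCs through the test harness instead):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev with 512 B blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o                                            # TCP transport, flags as used by the test
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set the serial number
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420           # discovery service on the same port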
00:34:49.298 Starting thread on core 1 00:34:49.298 Starting thread on core 2 00:34:49.298 Starting thread on core 3 00:34:49.298 Starting thread on core 0 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:49.298 00:34:49.298 real 0m10.678s 00:34:49.298 user 0m33.414s 00:34:49.298 sys 0m7.579s 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.298 ************************************ 00:34:49.298 END TEST nvmf_target_disconnect_tc2 00:34:49.298 ************************************ 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:49.298 rmmod nvme_tcp 00:34:49.298 rmmod nvme_fabrics 00:34:49.298 rmmod nvme_keyring 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 406757 ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 406757 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 406757 ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 406757 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406757 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406757' 00:34:49.298 killing process with pid 406757 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 406757 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 406757 00:34:49.298 11:21:03 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:49.298 11:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.199 11:21:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:51.199 00:34:51.199 real 0m15.523s 00:34:51.199 user 0m58.828s 00:34:51.199 sys 0m10.045s 00:34:51.199 11:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.199 11:21:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:51.199 ************************************ 00:34:51.199 END TEST nvmf_target_disconnect 00:34:51.199 ************************************ 00:34:51.199 11:21:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:51.199 11:21:05 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:51.199 11:21:05 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:51.199 11:21:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.199 11:21:05 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:51.199 00:34:51.199 real 27m3.536s 00:34:51.199 user 73m51.052s 00:34:51.199 sys 6m29.089s 00:34:51.199 11:21:05 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.199 11:21:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.199 ************************************ 00:34:51.199 END TEST nvmf_tcp 00:34:51.199 ************************************ 00:34:51.199 11:21:05 -- common/autotest_common.sh@1142 -- # return 0 00:34:51.199 11:21:05 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:51.199 11:21:05 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:51.199 11:21:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.199 11:21:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.199 11:21:05 -- common/autotest_common.sh@10 -- # set +x 00:34:51.457 ************************************ 00:34:51.457 START TEST spdkcli_nvmf_tcp 00:34:51.457 ************************************ 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:51.457 * Looking for test storage... 
00:34:51.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:51.457 11:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=408062 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 408062 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 408062 ']' 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:51.458 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.458 [2024-07-11 11:21:05.739724] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:34:51.458 [2024-07-11 11:21:05.739827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408062 ] 00:34:51.458 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.458 [2024-07-11 11:21:05.796390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.717 [2024-07-11 11:21:05.881440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.717 [2024-07-11 11:21:05.881445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.717 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:51.717 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:51.717 11:21:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:51.717 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:51.717 11:21:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.717 11:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:51.717 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:51.717 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:51.717 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:51.717 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:51.717 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:51.717 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:51.717 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:51.717 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:51.717 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:51.717 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:51.717 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:51.718 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:51.718 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:51.718 ' 00:34:54.248 [2024-07-11 11:21:08.594115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.654 [2024-07-11 11:21:09.814304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:58.183 [2024-07-11 11:21:12.073418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:00.081 [2024-07-11 11:21:14.015545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:01.451 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:01.451 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:01.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:01.451 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:01.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:01.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:01.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:01.451 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:01.451 11:21:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.709 11:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:01.709 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:01.709 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:01.709 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:01.709 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:01.709 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:01.709 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:01.709 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:01.709 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:01.709 ' 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:06.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:06.970 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:06.970 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:06.970 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 408062 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 408062 ']' 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 408062 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 408062 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 408062' 00:35:06.970 killing process with pid 408062 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 408062 00:35:06.970 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 408062 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 408062 ']' 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 408062 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 408062 ']' 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 408062 00:35:07.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (408062) - No such process 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 408062 is not found' 00:35:07.227 Process with pid 408062 is not found 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:07.227 00:35:07.227 real 0m15.911s 00:35:07.227 user 0m33.559s 00:35:07.227 sys 0m0.858s 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:07.227 11:21:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.227 ************************************ 00:35:07.227 END TEST spdkcli_nvmf_tcp 00:35:07.227 ************************************ 00:35:07.227 11:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:35:07.227 11:21:21 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:07.227 11:21:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:07.227 11:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:07.227 11:21:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.227 ************************************ 00:35:07.227 START TEST nvmf_identify_passthru 00:35:07.227 ************************************ 00:35:07.227 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:07.227 * Looking for test storage... 00:35:07.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.227 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.227 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:07.485 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:07.485 11:21:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.485 11:21:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.485 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.485 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:07.485 11:21:21 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:07.485 11:21:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.380 11:21:23 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:09.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:09.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:09.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:09.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
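Note: the nvmf_tcp_init step that follows splits the NIC's two ports between the host and a dedicated network namespace, so target and initiator traffic crosses real hardware rather than loopback. A condensed sketch of the commands executed below (interface names and addresses as logged; 4420 is the NVMe/TCP port the firewall rule opens):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # reachability sanity check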
00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:09.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:09.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:35:09.380 00:35:09.380 --- 10.0.0.2 ping statistics --- 00:35:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.380 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:09.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:09.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:35:09.380 00:35:09.380 --- 10.0.0.1 ping statistics --- 00:35:09.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.380 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:09.380 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:09.381 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.381 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:09.381 11:21:23 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:09.381 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:09.381 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:09.381 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:09.639 11:21:23 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:09.639 11:21:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:09.639 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.817 
11:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:13.817 11:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:13.817 11:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:13.817 11:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:13.817 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=413077 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:18.001 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 413077 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 413077 ']' 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:18.001 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.001 [2024-07-11 11:21:32.348481] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:35:18.001 [2024-07-11 11:21:32.348565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.001 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.001 [2024-07-11 11:21:32.413902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:18.259 [2024-07-11 11:21:32.503882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.259 [2024-07-11 11:21:32.503941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:18.259 [2024-07-11 11:21:32.503955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.259 [2024-07-11 11:21:32.503967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.259 [2024-07-11 11:21:32.503978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.259 [2024-07-11 11:21:32.504034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.259 [2024-07-11 11:21:32.504094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.259 [2024-07-11 11:21:32.504161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:18.259 [2024-07-11 11:21:32.504163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.259 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:18.259 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:18.259 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:18.259 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.259 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.259 INFO: Log level set to 20 00:35:18.259 INFO: Requests: 00:35:18.259 { 00:35:18.259 "jsonrpc": "2.0", 00:35:18.259 "method": "nvmf_set_config", 00:35:18.259 "id": 1, 00:35:18.259 "params": { 00:35:18.259 "admin_cmd_passthru": { 00:35:18.259 "identify_ctrlr": true 00:35:18.259 } 00:35:18.259 } 00:35:18.259 } 00:35:18.259 00:35:18.259 INFO: response: 00:35:18.259 { 00:35:18.259 "jsonrpc": "2.0", 00:35:18.259 "id": 1, 00:35:18.260 "result": true 00:35:18.260 } 00:35:18.260 00:35:18.260 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.260 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:18.260 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.260 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.260 INFO: Setting log level to 20 00:35:18.260 INFO: Setting log level to 20 00:35:18.260 INFO: Log level set to 20 00:35:18.260 INFO: Log level set to 20 00:35:18.260 INFO: Requests: 00:35:18.260 { 00:35:18.260 "jsonrpc": "2.0", 00:35:18.260 "method": "framework_start_init", 00:35:18.260 "id": 1 00:35:18.260 } 00:35:18.260 00:35:18.260 INFO: Requests: 00:35:18.260 { 00:35:18.260 "jsonrpc": "2.0", 00:35:18.260 "method": "framework_start_init", 00:35:18.260 "id": 1 00:35:18.260 } 00:35:18.260 00:35:18.260 [2024-07-11 11:21:32.671079] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:18.260 INFO: response: 00:35:18.260 { 00:35:18.260 "jsonrpc": "2.0", 00:35:18.260 "id": 1, 00:35:18.260 "result": true 00:35:18.260 } 00:35:18.260 00:35:18.260 INFO: response: 00:35:18.260 { 00:35:18.260 "jsonrpc": "2.0", 00:35:18.260 "id": 1, 00:35:18.260 "result": true 00:35:18.260 } 00:35:18.260 00:35:18.260 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.260 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:18.260 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.260 11:21:32 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:18.260 INFO: Setting log level to 40 00:35:18.260 INFO: Setting log level to 40 00:35:18.260 INFO: Setting log level to 40 00:35:18.260 [2024-07-11 11:21:32.681247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.517 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.517 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:18.517 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:18.517 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.517 11:21:32 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:18.517 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.517 11:21:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 Nvme0n1 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 [2024-07-11 11:21:35.571812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 [ 00:35:21.806 { 00:35:21.806 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:21.806 "subtype": "Discovery", 00:35:21.806 "listen_addresses": [], 00:35:21.806 "allow_any_host": true, 00:35:21.806 "hosts": [] 00:35:21.806 }, 00:35:21.806 { 00:35:21.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:21.806 "subtype": "NVMe", 00:35:21.806 "listen_addresses": [ 00:35:21.806 { 00:35:21.806 "trtype": "TCP", 00:35:21.806 "adrfam": "IPv4", 00:35:21.806 "traddr": "10.0.0.2", 00:35:21.806 "trsvcid": "4420" 00:35:21.806 } 00:35:21.806 ], 00:35:21.806 "allow_any_host": true, 00:35:21.806 "hosts": [], 00:35:21.806 "serial_number": 
"SPDK00000000000001", 00:35:21.806 "model_number": "SPDK bdev Controller", 00:35:21.806 "max_namespaces": 1, 00:35:21.806 "min_cntlid": 1, 00:35:21.806 "max_cntlid": 65519, 00:35:21.806 "namespaces": [ 00:35:21.806 { 00:35:21.806 "nsid": 1, 00:35:21.806 "bdev_name": "Nvme0n1", 00:35:21.806 "name": "Nvme0n1", 00:35:21.806 "nguid": "86D082124A8045659DF76AEC53EBDBC0", 00:35:21.806 "uuid": "86d08212-4a80-4565-9df7-6aec53ebdbc0" 00:35:21.806 } 00:35:21.806 ] 00:35:21.806 } 00:35:21.806 ] 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:21.806 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:21.806 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:21.806 11:21:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:21.806 11:21:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:21.806 11:21:35 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:21.806 rmmod nvme_tcp 00:35:21.806 rmmod nvme_fabrics 00:35:21.806 rmmod nvme_keyring 00:35:21.806 11:21:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:21.806 11:21:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:21.806 11:21:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:21.806 11:21:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 413077 ']' 00:35:21.806 11:21:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 413077 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 413077 ']' 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 413077 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 413077 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:21.806 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 413077' 00:35:21.806 killing process with pid 413077 00:35:21.807 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 413077 00:35:21.807 11:21:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 413077 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:23.203 11:21:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.203 11:21:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:23.203 11:21:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.741 11:21:39 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:25.741 00:35:25.741 real 0m18.046s 00:35:25.741 user 0m26.784s 00:35:25.741 sys 0m2.338s 00:35:25.741 11:21:39 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:25.741 11:21:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.741 ************************************ 00:35:25.741 END TEST nvmf_identify_passthru 00:35:25.741 ************************************ 00:35:25.741 11:21:39 -- common/autotest_common.sh@1142 -- # return 0 00:35:25.741 11:21:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:25.741 11:21:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:25.741 11:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:25.741 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:35:25.741 ************************************ 00:35:25.741 START TEST nvmf_dif 00:35:25.741 ************************************ 00:35:25.741 11:21:39 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:25.741 * Looking for test storage... 
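Note on the identify_passthru run that just finished: it attaches the local PCIe NVMe drive at 0000:88:00.0 as bdev Nvme0n1, exports it through an NVMe/TCP subsystem, and then checks that spdk_nvme_identify over the fabric reports the physical drive's serial and model numbers rather than the subsystem's. Condensed, the RPC sequence the rpc_cmd helper drove above looks like this when issued by hand with SPDK's rpc.py (PCI address, NQN, and listener values are the ones from this run):

  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Identify over the fabric; the serial printed should be the drive's own
  # (PHLJ916004901P0FGN above), which proves the passthru path works:
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'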
00:35:25.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:25.741 11:21:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.741 11:21:39 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.741 11:21:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.741 11:21:39 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.741 11:21:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.741 11:21:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.742 11:21:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.742 11:21:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.742 11:21:39 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:25.742 11:21:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:25.742 11:21:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:25.742 11:21:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:25.742 11:21:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:25.742 11:21:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:25.742 11:21:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.742 11:21:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:25.742 11:21:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:25.742 11:21:39 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:25.742 11:21:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:27.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:27.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
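The device scan traced above comes from gather_supported_nvmf_pci_devs in nvmf/common.sh: NICs are bucketed by PCI vendor:device ID (Intel E810 ports are 8086:1592 and 8086:159b, X722 is 8086:37d2, plus several Mellanox ConnectX IDs), and this host matched two E810 ports bound to the ice driver. A rough stand-alone equivalent of that scan, assuming lspci is available and using only the Intel IDs from the tables above:

  # List NICs this suite would consider usable for NVMe/TCP testing:
  lspci -Dnn | grep -Ei '8086:(1592|159b|37d2)'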
00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:27.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:27.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.643 11:21:41 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:27.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:35:27.643 00:35:27.643 --- 10.0.0.2 ping statistics --- 00:35:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.643 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:27.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:35:27.643 00:35:27.643 --- 10.0.0.1 ping statistics --- 00:35:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.643 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:27.643 11:21:41 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:28.578 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:28.578 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:28.578 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:28.578 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:28.578 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:28.578 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:28.578 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:28.578 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:28.578 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:28.578 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:28.578 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:28.578 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:28.578 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:28.578 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:28.578 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:28.578 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:28.578 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:28.836 11:21:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:28.836 11:21:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=416339 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:28.836 11:21:43 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 416339 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 416339 ']' 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:28.836 11:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.836 [2024-07-11 11:21:43.210207] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:35:28.836 [2024-07-11 11:21:43.210285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.094 [2024-07-11 11:21:43.277208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.094 [2024-07-11 11:21:43.364818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.094 [2024-07-11 11:21:43.364896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.094 [2024-07-11 11:21:43.364910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.094 [2024-07-11 11:21:43.364921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.094 [2024-07-11 11:21:43.364931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
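Worth noting in the startup above: nvmf_tgt is launched inside the network namespace cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix), so the target port (cvl_0_0, 10.0.0.2) and the initiator port (cvl_0_1, 10.0.0.1) exchange traffic over the physical E810 link even though both live on one host. Condensed, the wiring nvmf_tcp_init performed earlier in the trace (interface names are specific to this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # reach the target namespace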
00:35:29.094 [2024-07-11 11:21:43.364963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:29.094 11:21:43 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.094 11:21:43 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.094 11:21:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:29.094 11:21:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.094 [2024-07-11 11:21:43.504663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.094 11:21:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:29.094 11:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.354 ************************************ 00:35:29.354 START TEST fio_dif_1_default 00:35:29.354 ************************************ 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.354 bdev_null0 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.354 [2024-07-11 11:21:43.560964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.354 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:29.355 { 00:35:29.355 "params": { 00:35:29.355 "name": "Nvme$subsystem", 00:35:29.355 "trtype": "$TEST_TRANSPORT", 00:35:29.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.355 "adrfam": "ipv4", 00:35:29.355 "trsvcid": "$NVMF_PORT", 00:35:29.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.355 "hdgst": ${hdgst:-false}, 00:35:29.355 "ddgst": ${ddgst:-false} 00:35:29.355 }, 00:35:29.355 "method": "bdev_nvme_attach_controller" 00:35:29.355 } 00:35:29.355 EOF 00:35:29.355 )") 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:29.355 "params": { 00:35:29.355 "name": "Nvme0", 00:35:29.355 "trtype": "tcp", 00:35:29.355 "traddr": "10.0.0.2", 00:35:29.355 "adrfam": "ipv4", 00:35:29.355 "trsvcid": "4420", 00:35:29.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.355 "hdgst": false, 00:35:29.355 "ddgst": false 00:35:29.355 }, 00:35:29.355 "method": "bdev_nvme_attach_controller" 00:35:29.355 }' 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.355 11:21:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.612 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:29.612 fio-3.35 00:35:29.612 Starting 1 thread 00:35:29.612 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.868 00:35:41.868 filename0: (groupid=0, jobs=1): err= 0: pid=416569: Thu Jul 11 11:21:54 2024 00:35:41.868 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10023msec) 00:35:41.868 slat (nsec): min=4007, max=30569, avg=9230.99, stdev=2674.53 00:35:41.868 clat (usec): min=40923, max=48202, avg=41905.09, stdev=499.65 00:35:41.868 lat (usec): min=40931, max=48215, avg=41914.32, stdev=499.73 00:35:41.868 clat percentiles (usec): 00:35:41.868 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:35:41.868 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:41.868 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:41.868 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:35:41.868 | 99.99th=[47973] 00:35:41.868 bw ( KiB/s): min= 352, max= 384, per=99.60%, avg=380.80, stdev= 9.85, samples=20 00:35:41.868 iops : min= 88, max= 96, 
avg=95.20, stdev= 2.46, samples=20 00:35:41.868 lat (msec) : 50=100.00% 00:35:41.868 cpu : usr=89.73%, sys=9.88%, ctx=25, majf=0, minf=236 00:35:41.868 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.868 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.868 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:41.868 00:35:41.868 Run status group 0 (all jobs): 00:35:41.868 READ: bw=382KiB/s (391kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10023-10023msec 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.868 00:35:41.868 real 0m11.127s 00:35:41.868 user 0m10.306s 00:35:41.868 sys 0m1.252s 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:41.868 11:21:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.868 ************************************ 00:35:41.868 END TEST fio_dif_1_default 00:35:41.868 ************************************ 00:35:41.868 11:21:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:41.868 11:21:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:41.868 11:21:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:41.868 11:21:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 ************************************ 00:35:41.869 START TEST fio_dif_1_multi_subsystems 00:35:41.869 ************************************ 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
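For reference, fio_dif_1_default above boils down to: a null bdev with 16-byte metadata and DIF type 1 is exported over NVMe/TCP (with --dif-insert-or-strip set on the transport), and fio drives it through SPDK's bdev ioengine. A minimal reproduction of that invocation, assuming the stock SPDK fio plugin; the job values are read off fio's own output above, and the filename is assumed to be the bdev name the JSON config creates:

  # job.fio -- matches the logged job line: randread, 4 KiB blocks, iodepth 4
  [filename0]
  filename=Nvme0n1
  rw=randread
  bs=4k
  iodepth=4

  # Invocation, as in the LD_PRELOAD line in the trace (paths shortened):
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio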
00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 bdev_null0 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 [2024-07-11 11:21:54.738416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 bdev_null1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 11:21:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:41.869 { 00:35:41.869 "params": { 00:35:41.869 "name": "Nvme$subsystem", 00:35:41.869 "trtype": "$TEST_TRANSPORT", 00:35:41.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.869 "adrfam": "ipv4", 00:35:41.869 "trsvcid": "$NVMF_PORT", 00:35:41.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.869 "hdgst": ${hdgst:-false}, 00:35:41.869 "ddgst": ${ddgst:-false} 00:35:41.869 }, 00:35:41.869 "method": "bdev_nvme_attach_controller" 00:35:41.869 } 00:35:41.869 EOF 00:35:41.869 )") 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@54 -- # local file 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:41.869 { 00:35:41.869 "params": { 00:35:41.869 "name": "Nvme$subsystem", 00:35:41.869 "trtype": "$TEST_TRANSPORT", 00:35:41.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.869 "adrfam": "ipv4", 00:35:41.869 "trsvcid": "$NVMF_PORT", 00:35:41.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.869 "hdgst": ${hdgst:-false}, 00:35:41.869 "ddgst": ${ddgst:-false} 00:35:41.869 }, 00:35:41.869 "method": "bdev_nvme_attach_controller" 00:35:41.869 } 00:35:41.869 EOF 00:35:41.869 )") 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
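The two heredoc fragments assembled above become the bdev configuration jq prints next: one bdev_nvme_attach_controller entry per subsystem, so fio sees two independent bdevs over the same 10.0.0.2:4420 listener. Written out as a file for --spdk_json_conf, the plugin expects the standard SPDK JSON config envelope around those entries; a sketch, where the envelope is the assumed part and the params match the printed config:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0" } },
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1" } }
        ]
      }
    ]
  }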
00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:41.869 "params": { 00:35:41.869 "name": "Nvme0", 00:35:41.869 "trtype": "tcp", 00:35:41.869 "traddr": "10.0.0.2", 00:35:41.869 "adrfam": "ipv4", 00:35:41.869 "trsvcid": "4420", 00:35:41.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:41.869 "hdgst": false, 00:35:41.869 "ddgst": false 00:35:41.869 }, 00:35:41.869 "method": "bdev_nvme_attach_controller" 00:35:41.869 },{ 00:35:41.869 "params": { 00:35:41.869 "name": "Nvme1", 00:35:41.869 "trtype": "tcp", 00:35:41.869 "traddr": "10.0.0.2", 00:35:41.869 "adrfam": "ipv4", 00:35:41.869 "trsvcid": "4420", 00:35:41.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:41.869 "hdgst": false, 00:35:41.869 "ddgst": false 00:35:41.869 }, 00:35:41.869 "method": "bdev_nvme_attach_controller" 00:35:41.869 }' 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:41.869 11:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.869 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:41.869 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:41.869 fio-3.35 00:35:41.869 Starting 2 threads 00:35:41.870 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.833 00:35:51.833 filename0: (groupid=0, jobs=1): err= 0: pid=417963: Thu Jul 11 11:22:05 2024 00:35:51.833 read: IOPS=98, BW=392KiB/s (402kB/s)(3936KiB/10033msec) 00:35:51.833 slat (nsec): min=4297, max=29684, avg=9728.88, stdev=2761.88 00:35:51.833 clat (usec): min=722, max=43148, avg=40750.61, stdev=3638.76 00:35:51.833 lat (usec): min=730, max=43176, avg=40760.34, stdev=3638.79 00:35:51.833 clat percentiles (usec): 00:35:51.833 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:51.833 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:51.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:35:51.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:51.833 | 99.99th=[43254] 
00:35:51.833 bw ( KiB/s): min= 384, max= 416, per=20.99%, avg=392.00, stdev=14.22, samples=20 00:35:51.833 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:35:51.833 lat (usec) : 750=0.61%, 1000=0.20% 00:35:51.833 lat (msec) : 50=99.19% 00:35:51.833 cpu : usr=94.01%, sys=5.70%, ctx=37, majf=0, minf=81 00:35:51.833 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.833 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.833 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:51.833 filename1: (groupid=0, jobs=1): err= 0: pid=417964: Thu Jul 11 11:22:05 2024 00:35:51.833 read: IOPS=368, BW=1473KiB/s (1508kB/s)(14.4MiB/10015msec) 00:35:51.833 slat (nsec): min=4683, max=30486, avg=9633.87, stdev=2594.08 00:35:51.833 clat (usec): min=521, max=44042, avg=10831.10, stdev=17617.94 00:35:51.833 lat (usec): min=529, max=44055, avg=10840.73, stdev=17617.89 00:35:51.833 clat percentiles (usec): 00:35:51.833 | 1.00th=[ 553], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 586], 00:35:51.833 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 644], 00:35:51.833 | 70.00th=[ 668], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:51.833 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:51.833 | 99.99th=[44303] 00:35:51.833 bw ( KiB/s): min= 768, max= 2176, per=79.08%, avg=1473.60, stdev=427.62, samples=20 00:35:51.833 iops : min= 192, max= 544, avg=368.40, stdev=106.91, samples=20 00:35:51.833 lat (usec) : 750=72.13%, 1000=2.71% 00:35:51.833 lat (msec) : 50=25.16% 00:35:51.833 cpu : usr=94.03%, sys=5.45%, ctx=69, majf=0, minf=169 00:35:51.833 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.833 issued rwts: total=3688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.833 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:51.833 00:35:51.833 Run status group 0 (all jobs): 00:35:51.833 READ: bw=1863KiB/s (1907kB/s), 392KiB/s-1473KiB/s (402kB/s-1508kB/s), io=18.2MiB (19.1MB), run=10015-10033msec 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.833 11:22:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 00:35:51.834 real 0m11.305s 00:35:51.834 user 0m20.057s 00:35:51.834 sys 0m1.394s 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 ************************************ 00:35:51.834 END TEST fio_dif_1_multi_subsystems 00:35:51.834 ************************************ 00:35:51.834 11:22:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:51.834 11:22:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:51.834 11:22:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:51.834 11:22:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 ************************************ 00:35:51.834 START TEST fio_dif_rand_params 00:35:51.834 ************************************ 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.834 11:22:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 bdev_null0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.834 [2024-07-11 11:22:06.084328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.834 { 00:35:51.834 "params": { 00:35:51.834 "name": "Nvme$subsystem", 00:35:51.834 "trtype": "$TEST_TRANSPORT", 00:35:51.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.834 "adrfam": "ipv4", 00:35:51.834 "trsvcid": "$NVMF_PORT", 00:35:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.834 "hdgst": ${hdgst:-false}, 00:35:51.834 "ddgst": ${ddgst:-false} 00:35:51.834 }, 00:35:51.834 "method": "bdev_nvme_attach_controller" 00:35:51.834 } 00:35:51.834 EOF 00:35:51.834 )") 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
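gen_nvmf_target_json fills the heredoc template above once per subsystem and streams the result to fio over /dev/fd/62. Materialized as a real file for subsystem 0, the config looks roughly like the sketch below; the params values come from the create_subsystem and add_listener calls above, while the file name bdev.json and the enclosing subsystems/bdev/config wrapper are assumptions based on SPDK's usual JSON-config shape, not something printed verbatim in this trace:

    # Hypothetical standalone equivalent of the /dev/fd/62 config
    # (wrapper shape assumed; params copied from the trace).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # reused by later sketches
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF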
00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
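The repeated ldd/grep/awk lines above are fio_plugin()'s sanitizer probe: if the plugin binary links an ASAN runtime, that runtime has to sit in LD_PRELOAD ahead of the plugin itself. Condensed into a standalone sketch (behavior inferred from the trace; both probes come back empty on this build, so asan_lib stays unset):

    # Probe the fio plugin for a linked ASAN runtime (gcc's libasan or
    # clang's libclang_rt.asan) and preload it before the plugin if found.
    plugin=$SPDK/build/fio/spdk_bdev   # $SPDK as in the sketch above
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    export LD_PRELOAD="$asan_lib $plugin"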
00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:51.834 "params": { 00:35:51.834 "name": "Nvme0", 00:35:51.834 "trtype": "tcp", 00:35:51.834 "traddr": "10.0.0.2", 00:35:51.834 "adrfam": "ipv4", 00:35:51.834 "trsvcid": "4420", 00:35:51.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.834 "hdgst": false, 00:35:51.834 "ddgst": false 00:35:51.834 }, 00:35:51.834 "method": "bdev_nvme_attach_controller" 00:35:51.834 }' 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:51.834 11:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.092 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:52.092 ... 
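With the config assembled and the preload set, the LD_PRELOAD and /usr/src/fio/fio lines above are what the wrapper finally execs. A standalone equivalent of this first job, reconstructed from the banner (randread, 128 KiB blocks, iodepth 3) and the dif.sh@103 settings (numjobs=3, runtime=5); passing the job parameters on the command line instead of via /dev/fd/61 is an assumption, as is Nvme0n1 being the namespace bdev created by the Nvme0 attach:

    # Sketch: run the same randread job against the attached bdev,
    # using bdev.json from the earlier sketch in place of /dev/fd/62.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --runtime=5 --time_based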
00:35:52.092 fio-3.35 00:35:52.092 Starting 3 threads 00:35:52.092 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.690 00:35:58.690 filename0: (groupid=0, jobs=1): err= 0: pid=419244: Thu Jul 11 11:22:12 2024 00:35:58.690 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(144MiB/5047msec) 00:35:58.690 slat (nsec): min=5933, max=86910, avg=16957.81, stdev=6867.88 00:35:58.690 clat (usec): min=4542, max=55734, avg=13106.97, stdev=4016.10 00:35:58.690 lat (usec): min=4553, max=55746, avg=13123.93, stdev=4015.74 00:35:58.690 clat percentiles (usec): 00:35:58.690 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:35:58.690 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:35:58.690 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[15401], 00:35:58.690 | 99.00th=[17433], 99.50th=[49546], 99.90th=[55313], 99.95th=[55837], 00:35:58.690 | 99.99th=[55837] 00:35:58.690 bw ( KiB/s): min=27648, max=30976, per=33.91%, avg=29363.20, stdev=1333.22, samples=10 00:35:58.690 iops : min= 216, max= 242, avg=229.40, stdev=10.42, samples=10 00:35:58.690 lat (msec) : 10=4.70%, 20=94.35%, 50=0.61%, 100=0.35% 00:35:58.690 cpu : usr=93.26%, sys=6.12%, ctx=44, majf=0, minf=131 00:35:58.690 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:58.691 filename0: (groupid=0, jobs=1): err= 0: pid=419245: Thu Jul 11 11:22:12 2024 00:35:58.691 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5004msec) 00:35:58.691 slat (nsec): min=6196, max=43863, avg=14107.06, stdev=3507.04 00:35:58.691 clat (usec): min=5448, max=50411, avg=13494.34, stdev=2640.32 00:35:58.691 lat (usec): min=5461, max=50422, avg=13508.45, stdev=2640.34 00:35:58.691 clat percentiles (usec): 00:35:58.691 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11207], 20.00th=[11863], 00:35:58.691 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[13960], 00:35:58.691 | 70.00th=[14484], 80.00th=[15139], 90.00th=[15664], 95.00th=[16319], 00:35:58.691 | 99.00th=[17433], 99.50th=[18220], 99.90th=[49021], 99.95th=[50594], 00:35:58.691 | 99.99th=[50594] 00:35:58.691 bw ( KiB/s): min=25907, max=30976, per=32.76%, avg=28369.90, stdev=1343.10, samples=10 00:35:58.691 iops : min= 202, max= 242, avg=221.60, stdev=10.57, samples=10 00:35:58.691 lat (msec) : 10=3.78%, 20=95.95%, 50=0.18%, 100=0.09% 00:35:58.691 cpu : usr=92.68%, sys=6.84%, ctx=14, majf=0, minf=122 00:35:58.691 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 issued rwts: total=1111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:58.691 filename0: (groupid=0, jobs=1): err= 0: pid=419246: Thu Jul 11 11:22:12 2024 00:35:58.691 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5045msec) 00:35:58.691 slat (nsec): min=6428, max=38955, avg=14270.70, stdev=3659.86 00:35:58.691 clat (usec): min=4852, max=54614, avg=13071.90, stdev=3850.02 00:35:58.691 lat (usec): min=4861, max=54630, avg=13086.17, stdev=3850.02 00:35:58.691 clat percentiles (usec): 00:35:58.691 
| 1.00th=[ 7701], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11207], 00:35:58.691 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:35:58.691 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15401], 95.00th=[16319], 00:35:58.691 | 99.00th=[17695], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:35:58.691 | 99.99th=[54789] 00:35:58.691 bw ( KiB/s): min=26880, max=32000, per=34.01%, avg=29445.70, stdev=1831.61, samples=10 00:35:58.691 iops : min= 210, max= 250, avg=230.00, stdev=14.33, samples=10 00:35:58.691 lat (msec) : 10=5.72%, 20=93.58%, 50=0.26%, 100=0.43% 00:35:58.691 cpu : usr=92.45%, sys=7.06%, ctx=18, majf=0, minf=155 00:35:58.691 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.691 issued rwts: total=1153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:58.691 00:35:58.691 Run status group 0 (all jobs): 00:35:58.691 READ: bw=84.6MiB/s (88.7MB/s), 27.8MiB/s-28.6MiB/s (29.1MB/s-30.0MB/s), io=427MiB (447MB), run=5004-5047msec 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:58.691 11:22:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 bdev_null0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 [2024-07-11 11:22:12.266006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 bdev_null1 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
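create_subsystems 0 1 2 repeats the same four-RPC pattern once per index, this time with --dif-type 2. Assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the running target, the equivalent sequence by hand is:

    # One DIF-type-2 null bdev, NVMe-oF subsystem, namespace and TCP
    # listener per index (arguments copied from the rpc_cmd trace;
    # $SPDK as in the earlier sketches).
    for i in 0 1 2; do
        "$SPDK/scripts/rpc.py" bdev_null_create "bdev_null$i" 64 512 \
            --md-size 16 --dif-type 2
        "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done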
00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 bdev_null2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.691 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:35:58.691 { 00:35:58.691 "params": { 00:35:58.691 "name": "Nvme$subsystem", 00:35:58.692 "trtype": "$TEST_TRANSPORT", 00:35:58.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "$NVMF_PORT", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.692 "hdgst": ${hdgst:-false}, 00:35:58.692 "ddgst": ${ddgst:-false} 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 } 00:35:58.692 EOF 00:35:58.692 )") 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.692 { 00:35:58.692 "params": { 00:35:58.692 "name": "Nvme$subsystem", 00:35:58.692 "trtype": "$TEST_TRANSPORT", 00:35:58.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "$NVMF_PORT", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.692 "hdgst": ${hdgst:-false}, 00:35:58.692 "ddgst": ${ddgst:-false} 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 } 00:35:58.692 EOF 00:35:58.692 )") 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.692 { 00:35:58.692 "params": { 00:35:58.692 "name": "Nvme$subsystem", 00:35:58.692 "trtype": "$TEST_TRANSPORT", 00:35:58.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "$NVMF_PORT", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.692 "hdgst": ${hdgst:-false}, 00:35:58.692 "ddgst": ${ddgst:-false} 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 } 00:35:58.692 EOF 00:35:58.692 )") 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:58.692 "params": { 00:35:58.692 "name": "Nvme0", 00:35:58.692 "trtype": "tcp", 00:35:58.692 "traddr": "10.0.0.2", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "4420", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.692 "hdgst": false, 00:35:58.692 "ddgst": false 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 },{ 00:35:58.692 "params": { 00:35:58.692 "name": "Nvme1", 00:35:58.692 "trtype": "tcp", 00:35:58.692 "traddr": "10.0.0.2", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "4420", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.692 "hdgst": false, 00:35:58.692 "ddgst": false 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 },{ 00:35:58.692 "params": { 00:35:58.692 "name": "Nvme2", 00:35:58.692 "trtype": "tcp", 00:35:58.692 "traddr": "10.0.0.2", 00:35:58.692 "adrfam": "ipv4", 00:35:58.692 "trsvcid": "4420", 00:35:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:58.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:58.692 "hdgst": false, 00:35:58.692 "ddgst": false 00:35:58.692 }, 00:35:58.692 "method": "bdev_nvme_attach_controller" 00:35:58.692 }' 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:58.692 11:22:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.692 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:58.692 ... 00:35:58.692 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:58.692 ... 00:35:58.692 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:58.692 ... 00:35:58.692 fio-3.35 00:35:58.692 Starting 24 threads 00:35:58.692 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.893 00:36:10.893 filename0: (groupid=0, jobs=1): err= 0: pid=420104: Thu Jul 11 11:22:23 2024 00:36:10.893 read: IOPS=241, BW=966KiB/s (989kB/s)(9672KiB/10011msec) 00:36:10.893 slat (usec): min=5, max=108, avg=21.03, stdev=15.88 00:36:10.893 clat (msec): min=18, max=379, avg=66.07, stdev=73.08 00:36:10.893 lat (msec): min=18, max=379, avg=66.09, stdev=73.09 00:36:10.893 clat percentiles (msec): 00:36:10.893 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.893 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.893 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 209], 95.00th=[ 247], 00:36:10.893 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 376], 99.95th=[ 380], 00:36:10.893 | 99.99th=[ 380] 00:36:10.893 bw ( KiB/s): min= 256, max= 2096, per=4.26%, avg=960.80, stdev=824.88, samples=20 00:36:10.893 iops : min= 64, max= 524, avg=240.20, stdev=206.22, samples=20 00:36:10.893 lat (msec) : 20=0.50%, 50=80.23%, 100=2.23%, 250=12.74%, 500=4.30% 00:36:10.893 cpu : usr=98.06%, sys=1.52%, ctx=31, majf=0, minf=35 00:36:10.893 IO depths : 1=5.1%, 2=10.8%, 4=23.2%, 8=53.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:10.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.893 filename0: (groupid=0, jobs=1): err= 0: pid=420105: Thu Jul 11 11:22:23 2024 00:36:10.893 read: IOPS=245, BW=981KiB/s (1005kB/s)(9840KiB/10027msec) 00:36:10.893 slat (nsec): min=8092, max=84353, avg=24697.11, stdev=13972.07 00:36:10.893 clat (msec): min=21, max=305, avg=65.02, stdev=67.14 00:36:10.893 lat (msec): min=21, max=305, avg=65.05, stdev=67.14 00:36:10.893 clat percentiles (msec): 00:36:10.893 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.893 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.893 | 70.00th=[ 34], 80.00th=[ 38], 90.00th=[ 199], 95.00th=[ 222], 00:36:10.893 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 279], 99.95th=[ 305], 00:36:10.893 | 99.99th=[ 305] 00:36:10.893 bw ( KiB/s): min= 256, max= 2048, per=4.33%, avg=977.60, stdev=810.32, samples=20 00:36:10.893 iops : min= 64, max= 512, avg=244.40, stdev=202.58, samples=20 00:36:10.893 lat (msec) : 50=80.65%, 100=0.16%, 250=18.37%, 500=0.81% 00:36:10.893 cpu : usr=98.11%, sys=1.28%, ctx=62, majf=0, minf=52 
00:36:10.893 IO depths : 1=5.1%, 2=10.4%, 4=22.2%, 8=54.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:10.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 issued rwts: total=2460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.893 filename0: (groupid=0, jobs=1): err= 0: pid=420106: Thu Jul 11 11:22:23 2024 00:36:10.893 read: IOPS=231, BW=927KiB/s (950kB/s)(9280KiB/10008msec) 00:36:10.893 slat (nsec): min=8293, max=88752, avg=36663.11, stdev=12312.70 00:36:10.893 clat (msec): min=7, max=456, avg=68.69, stdev=87.06 00:36:10.893 lat (msec): min=7, max=456, avg=68.73, stdev=87.06 00:36:10.893 clat percentiles (msec): 00:36:10.893 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.893 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.893 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 309], 00:36:10.893 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 456], 00:36:10.893 | 99.99th=[ 456] 00:36:10.893 bw ( KiB/s): min= 128, max= 2048, per=3.85%, avg=869.05, stdev=841.08, samples=19 00:36:10.893 iops : min= 32, max= 512, avg=217.26, stdev=210.27, samples=19 00:36:10.893 lat (msec) : 10=0.69%, 50=83.45%, 100=0.69%, 250=5.60%, 500=9.57% 00:36:10.893 cpu : usr=98.13%, sys=1.41%, ctx=37, majf=0, minf=24 00:36:10.893 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:10.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.893 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.893 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.893 filename0: (groupid=0, jobs=1): err= 0: pid=420107: Thu Jul 11 11:22:23 2024 00:36:10.893 read: IOPS=229, BW=918KiB/s (940kB/s)(9176KiB/10001msec) 00:36:10.893 slat (usec): min=8, max=101, avg=38.49, stdev=16.48 00:36:10.893 clat (msec): min=20, max=475, avg=69.41, stdev=90.27 00:36:10.893 lat (msec): min=20, max=475, avg=69.45, stdev=90.27 00:36:10.893 clat percentiles (msec): 00:36:10.893 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.893 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.893 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 255], 95.00th=[ 313], 00:36:10.893 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 468], 99.95th=[ 477], 00:36:10.894 | 99.99th=[ 477] 00:36:10.894 bw ( KiB/s): min= 128, max= 2048, per=3.83%, avg=864.84, stdev=848.96, samples=19 00:36:10.894 iops : min= 32, max= 512, avg=216.21, stdev=212.24, samples=19 00:36:10.894 lat (msec) : 50=84.66%, 250=5.23%, 500=10.11% 00:36:10.894 cpu : usr=98.23%, sys=1.30%, ctx=26, majf=0, minf=28 00:36:10.894 IO depths : 1=5.3%, 2=11.2%, 4=23.6%, 8=52.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename0: (groupid=0, jobs=1): err= 0: pid=420108: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=239, BW=958KiB/s (981kB/s)(9600KiB/10021msec) 00:36:10.894 slat (nsec): min=8181, max=75995, avg=23721.23, stdev=10784.30 00:36:10.894 clat (msec): 
min=19, max=310, avg=66.58, stdev=73.04 00:36:10.894 lat (msec): min=19, max=310, avg=66.60, stdev=73.04 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 207], 95.00th=[ 249], 00:36:10.894 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:36:10.894 | 99.99th=[ 309] 00:36:10.894 bw ( KiB/s): min= 240, max= 2048, per=4.23%, avg=953.60, stdev=824.83, samples=20 00:36:10.894 iops : min= 60, max= 512, avg=238.40, stdev=206.21, samples=20 00:36:10.894 lat (msec) : 20=0.08%, 50=81.25%, 250=13.67%, 500=5.00% 00:36:10.894 cpu : usr=97.52%, sys=1.67%, ctx=100, majf=0, minf=45 00:36:10.894 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename0: (groupid=0, jobs=1): err= 0: pid=420109: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=230, BW=921KiB/s (943kB/s)(9216KiB/10009msec) 00:36:10.894 slat (usec): min=8, max=104, avg=35.56, stdev=15.29 00:36:10.894 clat (msec): min=7, max=488, avg=69.15, stdev=91.39 00:36:10.894 lat (msec): min=7, max=488, avg=69.18, stdev=91.39 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 255], 95.00th=[ 313], 00:36:10.894 | 99.00th=[ 363], 99.50th=[ 409], 99.90th=[ 451], 99.95th=[ 489], 00:36:10.894 | 99.99th=[ 489] 00:36:10.894 bw ( KiB/s): min= 128, max= 2048, per=3.82%, avg=862.32, stdev=847.70, samples=19 00:36:10.894 iops : min= 32, max= 512, avg=215.58, stdev=211.92, samples=19 00:36:10.894 lat (msec) : 10=0.69%, 50=84.11%, 100=0.61%, 250=3.73%, 500=10.85% 00:36:10.894 cpu : usr=97.39%, sys=1.83%, ctx=84, majf=0, minf=28 00:36:10.894 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename0: (groupid=0, jobs=1): err= 0: pid=420110: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=231, BW=927KiB/s (950kB/s)(9280KiB/10006msec) 00:36:10.894 slat (nsec): min=8419, max=96391, avg=36657.07, stdev=14078.37 00:36:10.894 clat (msec): min=21, max=445, avg=68.69, stdev=85.25 00:36:10.894 lat (msec): min=21, max=445, avg=68.72, stdev=85.25 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 243], 95.00th=[ 288], 00:36:10.894 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 397], 99.95th=[ 447], 00:36:10.894 | 99.99th=[ 447] 00:36:10.894 bw ( KiB/s): min= 144, max= 1920, per=4.08%, avg=921.60, stdev=838.07, samples=20 00:36:10.894 iops : min= 36, max= 480, avg=230.40, stdev=209.52, samples=20 
00:36:10.894 lat (msec) : 50=83.53%, 100=0.60%, 250=7.16%, 500=8.71% 00:36:10.894 cpu : usr=96.48%, sys=2.22%, ctx=518, majf=0, minf=28 00:36:10.894 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename0: (groupid=0, jobs=1): err= 0: pid=420111: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=235, BW=944KiB/s (967kB/s)(9456KiB/10018msec) 00:36:10.894 slat (usec): min=8, max=116, avg=24.82, stdev=16.00 00:36:10.894 clat (msec): min=20, max=388, avg=67.61, stdev=80.09 00:36:10.894 lat (msec): min=20, max=388, avg=67.63, stdev=80.09 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 234], 95.00th=[ 266], 00:36:10.894 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 384], 99.95th=[ 388], 00:36:10.894 | 99.99th=[ 388] 00:36:10.894 bw ( KiB/s): min= 144, max= 2048, per=4.16%, avg=939.20, stdev=841.93, samples=20 00:36:10.894 iops : min= 36, max= 512, avg=234.80, stdev=210.48, samples=20 00:36:10.894 lat (msec) : 50=82.83%, 100=0.42%, 250=9.73%, 500=7.02% 00:36:10.894 cpu : usr=98.22%, sys=1.31%, ctx=45, majf=0, minf=34 00:36:10.894 IO depths : 1=5.4%, 2=11.2%, 4=23.6%, 8=52.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename1: (groupid=0, jobs=1): err= 0: pid=420112: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=239, BW=959KiB/s (982kB/s)(9600KiB/10013msec) 00:36:10.894 slat (nsec): min=8178, max=66291, avg=32515.61, stdev=11957.85 00:36:10.894 clat (msec): min=27, max=273, avg=66.46, stdev=72.21 00:36:10.894 lat (msec): min=27, max=273, avg=66.50, stdev=72.20 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 213], 95.00th=[ 251], 00:36:10.894 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:36:10.894 | 99.99th=[ 275] 00:36:10.894 bw ( KiB/s): min= 256, max= 2000, per=4.23%, avg=953.60, stdev=819.28, samples=20 00:36:10.894 iops : min= 64, max= 500, avg=238.40, stdev=204.82, samples=20 00:36:10.894 lat (msec) : 50=80.75%, 100=1.25%, 250=13.25%, 500=4.75% 00:36:10.894 cpu : usr=98.08%, sys=1.55%, ctx=10, majf=0, minf=34 00:36:10.894 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename1: (groupid=0, jobs=1): err= 0: pid=420113: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=230, 
BW=921KiB/s (943kB/s)(9216KiB/10003msec) 00:36:10.894 slat (nsec): min=8376, max=64713, avg=24296.91, stdev=10373.38 00:36:10.894 clat (msec): min=23, max=461, avg=69.23, stdev=88.52 00:36:10.894 lat (msec): min=23, max=461, avg=69.25, stdev=88.52 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 264], 95.00th=[ 309], 00:36:10.894 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 464], 00:36:10.894 | 99.99th=[ 464] 00:36:10.894 bw ( KiB/s): min= 128, max= 2048, per=3.82%, avg=862.32, stdev=845.58, samples=19 00:36:10.894 iops : min= 32, max= 512, avg=215.58, stdev=211.39, samples=19 00:36:10.894 lat (msec) : 50=84.72%, 250=4.95%, 500=10.33% 00:36:10.894 cpu : usr=97.90%, sys=1.54%, ctx=65, majf=0, minf=32 00:36:10.894 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename1: (groupid=0, jobs=1): err= 0: pid=420114: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=239, BW=960KiB/s (983kB/s)(9608KiB/10013msec) 00:36:10.894 slat (usec): min=8, max=108, avg=34.55, stdev=15.73 00:36:10.894 clat (msec): min=19, max=365, avg=66.39, stdev=74.98 00:36:10.894 lat (msec): min=20, max=365, avg=66.42, stdev=74.98 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 209], 95.00th=[ 255], 00:36:10.894 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 368], 99.95th=[ 368], 00:36:10.894 | 99.99th=[ 368] 00:36:10.894 bw ( KiB/s): min= 256, max= 2176, per=4.23%, avg=954.40, stdev=838.36, samples=20 00:36:10.894 iops : min= 64, max= 544, avg=238.60, stdev=209.59, samples=20 00:36:10.894 lat (msec) : 20=0.04%, 50=81.89%, 100=0.25%, 250=12.74%, 500=5.08% 00:36:10.894 cpu : usr=98.19%, sys=1.24%, ctx=68, majf=0, minf=40 00:36:10.894 IO depths : 1=5.5%, 2=11.2%, 4=23.3%, 8=53.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:10.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.894 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.894 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.894 filename1: (groupid=0, jobs=1): err= 0: pid=420115: Thu Jul 11 11:22:23 2024 00:36:10.894 read: IOPS=241, BW=965KiB/s (988kB/s)(9664KiB/10013msec) 00:36:10.894 slat (nsec): min=8382, max=94006, avg=32713.71, stdev=13276.57 00:36:10.894 clat (msec): min=29, max=334, avg=66.03, stdev=69.29 00:36:10.894 lat (msec): min=29, max=334, avg=66.06, stdev=69.28 00:36:10.894 clat percentiles (msec): 00:36:10.894 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.894 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.894 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 201], 95.00th=[ 236], 00:36:10.894 | 99.00th=[ 259], 99.50th=[ 264], 99.90th=[ 288], 99.95th=[ 334], 00:36:10.894 | 99.99th=[ 334] 00:36:10.894 bw ( KiB/s): min= 256, max= 1936, 
per=4.26%, avg=960.00, stdev=807.19, samples=20 00:36:10.895 iops : min= 64, max= 484, avg=240.00, stdev=201.80, samples=20 00:36:10.895 lat (msec) : 50=80.22%, 100=0.75%, 250=16.89%, 500=2.15% 00:36:10.895 cpu : usr=98.14%, sys=1.43%, ctx=20, majf=0, minf=25 00:36:10.895 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename1: (groupid=0, jobs=1): err= 0: pid=420116: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10004msec) 00:36:10.895 slat (nsec): min=3869, max=72022, avg=34444.54, stdev=11695.14 00:36:10.895 clat (msec): min=23, max=467, avg=68.70, stdev=85.74 00:36:10.895 lat (msec): min=23, max=467, avg=68.73, stdev=85.74 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 245], 95.00th=[ 292], 00:36:10.895 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 460], 99.95th=[ 468], 00:36:10.895 | 99.99th=[ 468] 00:36:10.895 bw ( KiB/s): min= 128, max= 1920, per=3.85%, avg=869.05, stdev=826.75, samples=19 00:36:10.895 iops : min= 32, max= 480, avg=217.26, stdev=206.69, samples=19 00:36:10.895 lat (msec) : 50=83.53%, 100=0.60%, 250=6.21%, 500=9.66% 00:36:10.895 cpu : usr=98.33%, sys=1.22%, ctx=50, majf=0, minf=28 00:36:10.895 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename1: (groupid=0, jobs=1): err= 0: pid=420117: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10001msec) 00:36:10.895 slat (nsec): min=6575, max=97056, avg=38252.93, stdev=14368.72 00:36:10.895 clat (msec): min=21, max=492, avg=68.63, stdev=85.75 00:36:10.895 lat (msec): min=21, max=492, avg=68.67, stdev=85.76 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 236], 95.00th=[ 288], 00:36:10.895 | 99.00th=[ 355], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 493], 00:36:10.895 | 99.99th=[ 493] 00:36:10.895 bw ( KiB/s): min= 128, max= 2048, per=3.88%, avg=875.79, stdev=837.21, samples=19 00:36:10.895 iops : min= 32, max= 512, avg=218.95, stdev=209.30, samples=19 00:36:10.895 lat (msec) : 50=83.45%, 100=0.69%, 250=7.93%, 500=7.93% 00:36:10.895 cpu : usr=97.90%, sys=1.44%, ctx=85, majf=0, minf=29 00:36:10.895 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:36:10.895 filename1: (groupid=0, jobs=1): err= 0: pid=420118: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=231, BW=926KiB/s (949kB/s)(9280KiB/10018msec) 00:36:10.895 slat (usec): min=8, max=115, avg=36.76, stdev=21.98 00:36:10.895 clat (msec): min=21, max=451, avg=68.79, stdev=85.57 00:36:10.895 lat (msec): min=21, max=451, avg=68.82, stdev=85.56 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 245], 95.00th=[ 305], 00:36:10.895 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 451], 00:36:10.895 | 99.99th=[ 451] 00:36:10.895 bw ( KiB/s): min= 128, max= 2048, per=4.08%, avg=921.60, stdev=851.58, samples=20 00:36:10.895 iops : min= 32, max= 512, avg=230.40, stdev=212.90, samples=20 00:36:10.895 lat (msec) : 50=83.53%, 100=0.60%, 250=7.07%, 500=8.79% 00:36:10.895 cpu : usr=98.40%, sys=1.19%, ctx=15, majf=0, minf=23 00:36:10.895 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename1: (groupid=0, jobs=1): err= 0: pid=420119: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=244, BW=977KiB/s (1000kB/s)(9792KiB/10027msec) 00:36:10.895 slat (usec): min=7, max=101, avg=32.04, stdev=14.44 00:36:10.895 clat (msec): min=32, max=262, avg=65.26, stdev=67.87 00:36:10.895 lat (msec): min=32, max=262, avg=65.30, stdev=67.86 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 34], 80.00th=[ 50], 90.00th=[ 199], 95.00th=[ 234], 00:36:10.895 | 99.00th=[ 259], 99.50th=[ 262], 99.90th=[ 264], 99.95th=[ 264], 00:36:10.895 | 99.99th=[ 264] 00:36:10.895 bw ( KiB/s): min= 256, max= 2048, per=4.31%, avg=972.80, stdev=809.97, samples=20 00:36:10.895 iops : min= 64, max= 512, avg=243.20, stdev=202.49, samples=20 00:36:10.895 lat (msec) : 50=80.39%, 100=0.65%, 250=16.99%, 500=1.96% 00:36:10.895 cpu : usr=98.16%, sys=1.44%, ctx=17, majf=0, minf=29 00:36:10.895 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename2: (groupid=0, jobs=1): err= 0: pid=420120: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=230, BW=920KiB/s (942kB/s)(9216KiB/10013msec) 00:36:10.895 slat (usec): min=8, max=120, avg=40.32, stdev=17.47 00:36:10.895 clat (msec): min=32, max=409, avg=69.16, stdev=87.84 00:36:10.895 lat (msec): min=32, max=409, avg=69.20, stdev=87.85 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 309], 00:36:10.895 | 99.00th=[ 342], 99.50th=[ 409], 
99.90th=[ 409], 99.95th=[ 409], 00:36:10.895 | 99.99th=[ 409] 00:36:10.895 bw ( KiB/s): min= 128, max= 1920, per=4.06%, avg=915.20, stdev=843.22, samples=20 00:36:10.895 iops : min= 32, max= 480, avg=228.80, stdev=210.81, samples=20 00:36:10.895 lat (msec) : 50=84.03%, 250=6.25%, 500=9.72% 00:36:10.895 cpu : usr=97.21%, sys=1.79%, ctx=145, majf=0, minf=26 00:36:10.895 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename2: (groupid=0, jobs=1): err= 0: pid=420121: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10003msec) 00:36:10.895 slat (usec): min=8, max=104, avg=37.89, stdev=13.47 00:36:10.895 clat (msec): min=22, max=462, avg=68.65, stdev=86.40 00:36:10.895 lat (msec): min=22, max=462, avg=68.69, stdev=86.40 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 245], 95.00th=[ 296], 00:36:10.895 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 451], 99.95th=[ 464], 00:36:10.895 | 99.99th=[ 464] 00:36:10.895 bw ( KiB/s): min= 128, max= 1920, per=3.85%, avg=869.05, stdev=826.63, samples=19 00:36:10.895 iops : min= 32, max= 480, avg=217.26, stdev=206.66, samples=19 00:36:10.895 lat (msec) : 50=83.88%, 100=0.26%, 250=6.81%, 500=9.05% 00:36:10.895 cpu : usr=98.04%, sys=1.51%, ctx=17, majf=0, minf=29 00:36:10.895 IO depths : 1=5.7%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename2: (groupid=0, jobs=1): err= 0: pid=420122: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=230, BW=921KiB/s (943kB/s)(9216KiB/10007msec) 00:36:10.895 slat (usec): min=8, max=120, avg=34.59, stdev=13.71 00:36:10.895 clat (msec): min=11, max=475, avg=69.17, stdev=90.17 00:36:10.895 lat (msec): min=11, max=475, avg=69.20, stdev=90.17 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 255], 95.00th=[ 313], 00:36:10.895 | 99.00th=[ 351], 99.50th=[ 409], 99.90th=[ 468], 99.95th=[ 477], 00:36:10.895 | 99.99th=[ 477] 00:36:10.895 bw ( KiB/s): min= 128, max= 2048, per=3.82%, avg=862.32, stdev=845.56, samples=19 00:36:10.895 iops : min= 32, max= 512, avg=215.58, stdev=211.39, samples=19 00:36:10.895 lat (msec) : 20=0.69%, 50=84.03%, 250=5.21%, 500=10.07% 00:36:10.895 cpu : usr=98.15%, sys=1.40%, ctx=25, majf=0, minf=25 00:36:10.895 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:10.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.895 issued rwts: total=2304,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:10.895 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.895 filename2: (groupid=0, jobs=1): err= 0: pid=420123: Thu Jul 11 11:22:23 2024 00:36:10.895 read: IOPS=230, BW=921KiB/s (943kB/s)(9220KiB/10009msec) 00:36:10.895 slat (usec): min=8, max=107, avg=35.85, stdev=13.88 00:36:10.895 clat (msec): min=8, max=467, avg=69.08, stdev=87.43 00:36:10.895 lat (msec): min=8, max=467, avg=69.11, stdev=87.43 00:36:10.895 clat percentiles (msec): 00:36:10.895 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.895 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.895 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 249], 95.00th=[ 309], 00:36:10.895 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 376], 99.95th=[ 468], 00:36:10.895 | 99.99th=[ 468] 00:36:10.895 bw ( KiB/s): min= 128, max= 2048, per=3.85%, avg=869.05, stdev=841.06, samples=19 00:36:10.895 iops : min= 32, max= 512, avg=217.26, stdev=210.27, samples=19 00:36:10.895 lat (msec) : 10=0.04%, 50=84.08%, 100=0.61%, 250=5.55%, 500=9.72% 00:36:10.895 cpu : usr=97.23%, sys=1.86%, ctx=96, majf=0, minf=31 00:36:10.896 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.896 filename2: (groupid=0, jobs=1): err= 0: pid=420124: Thu Jul 11 11:22:23 2024 00:36:10.896 read: IOPS=239, BW=958KiB/s (981kB/s)(9600KiB/10025msec) 00:36:10.896 slat (usec): min=4, max=103, avg=36.32, stdev=15.54 00:36:10.896 clat (msec): min=26, max=310, avg=66.51, stdev=74.16 00:36:10.896 lat (msec): min=26, max=310, avg=66.55, stdev=74.16 00:36:10.896 clat percentiles (msec): 00:36:10.896 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.896 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.896 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 215], 95.00th=[ 255], 00:36:10.896 | 99.00th=[ 288], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 313], 00:36:10.896 | 99.99th=[ 313] 00:36:10.896 bw ( KiB/s): min= 256, max= 2048, per=4.23%, avg=953.60, stdev=825.75, samples=20 00:36:10.896 iops : min= 64, max= 512, avg=238.40, stdev=206.44, samples=20 00:36:10.896 lat (msec) : 50=80.75%, 100=2.50%, 250=10.67%, 500=6.08% 00:36:10.896 cpu : usr=97.58%, sys=1.60%, ctx=85, majf=0, minf=34 00:36:10.896 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.896 filename2: (groupid=0, jobs=1): err= 0: pid=420125: Thu Jul 11 11:22:23 2024 00:36:10.896 read: IOPS=231, BW=926KiB/s (949kB/s)(9280KiB/10018msec) 00:36:10.896 slat (usec): min=8, max=126, avg=43.78, stdev=28.06 00:36:10.896 clat (msec): min=21, max=386, avg=68.71, stdev=85.28 00:36:10.896 lat (msec): min=21, max=386, avg=68.75, stdev=85.27 00:36:10.896 clat percentiles (msec): 00:36:10.896 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.896 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.896 | 70.00th=[ 33], 80.00th=[ 
35], 90.00th=[ 234], 95.00th=[ 305], 00:36:10.896 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 388], 00:36:10.896 | 99.99th=[ 388] 00:36:10.896 bw ( KiB/s): min= 128, max= 2048, per=4.08%, avg=921.60, stdev=851.58, samples=20 00:36:10.896 iops : min= 32, max= 512, avg=230.40, stdev=212.90, samples=20 00:36:10.896 lat (msec) : 50=83.53%, 100=0.60%, 250=6.81%, 500=9.05% 00:36:10.896 cpu : usr=97.27%, sys=1.95%, ctx=55, majf=0, minf=27 00:36:10.896 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.896 filename2: (groupid=0, jobs=1): err= 0: pid=420126: Thu Jul 11 11:22:23 2024 00:36:10.896 read: IOPS=241, BW=965KiB/s (988kB/s)(9672KiB/10021msec) 00:36:10.896 slat (usec): min=8, max=110, avg=32.07, stdev=21.13 00:36:10.896 clat (msec): min=18, max=364, avg=66.03, stdev=71.87 00:36:10.896 lat (msec): min=18, max=364, avg=66.06, stdev=71.87 00:36:10.896 clat percentiles (msec): 00:36:10.896 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.896 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.896 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 207], 95.00th=[ 239], 00:36:10.896 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 363], 00:36:10.896 | 99.99th=[ 363] 00:36:10.896 bw ( KiB/s): min= 256, max= 2048, per=4.26%, avg=960.80, stdev=823.69, samples=20 00:36:10.896 iops : min= 64, max= 512, avg=240.20, stdev=205.92, samples=20 00:36:10.896 lat (msec) : 20=0.45%, 50=80.52%, 100=0.41%, 250=15.80%, 500=2.81% 00:36:10.896 cpu : usr=97.81%, sys=1.54%, ctx=73, majf=0, minf=21 00:36:10.896 IO depths : 1=5.7%, 2=11.5%, 4=23.6%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.896 filename2: (groupid=0, jobs=1): err= 0: pid=420127: Thu Jul 11 11:22:23 2024 00:36:10.896 read: IOPS=236, BW=947KiB/s (969kB/s)(9496KiB/10030msec) 00:36:10.896 slat (usec): min=3, max=122, avg=52.84, stdev=26.03 00:36:10.896 clat (msec): min=4, max=465, avg=67.12, stdev=87.14 00:36:10.896 lat (msec): min=4, max=465, avg=67.17, stdev=87.15 00:36:10.896 clat percentiles (msec): 00:36:10.896 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:36:10.896 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:10.896 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 251], 95.00th=[ 309], 00:36:10.896 | 99.00th=[ 342], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 468], 00:36:10.896 | 99.99th=[ 468] 00:36:10.896 bw ( KiB/s): min= 128, max= 2048, per=4.18%, avg=943.20, stdev=843.79, samples=20 00:36:10.896 iops : min= 32, max= 512, avg=235.80, stdev=210.95, samples=20 00:36:10.896 lat (msec) : 10=1.26%, 20=0.34%, 50=82.65%, 100=1.18%, 250=4.47% 00:36:10.896 lat (msec) : 500=10.11% 00:36:10.896 cpu : usr=98.09%, sys=1.40%, ctx=36, majf=0, minf=27 00:36:10.896 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:10.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:10.896 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.896 issued rwts: total=2374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.896 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:10.896 00:36:10.896 Run status group 0 (all jobs): 00:36:10.896 READ: bw=22.0MiB/s (23.1MB/s), 918KiB/s-981KiB/s (940kB/s-1005kB/s), io=221MiB (232MB), run=10001-10030msec 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 bdev_null0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.896 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.897 [2024-07-11 11:22:23.796543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.897 bdev_null1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.897 { 00:36:10.897 "params": { 00:36:10.897 "name": "Nvme$subsystem", 00:36:10.897 "trtype": "$TEST_TRANSPORT", 00:36:10.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.897 "adrfam": "ipv4", 00:36:10.897 "trsvcid": "$NVMF_PORT", 00:36:10.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.897 "hdgst": ${hdgst:-false}, 00:36:10.897 "ddgst": ${ddgst:-false} 00:36:10.897 }, 00:36:10.897 "method": "bdev_nvme_attach_controller" 00:36:10.897 } 00:36:10.897 EOF 00:36:10.897 )") 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.897 { 00:36:10.897 "params": { 00:36:10.897 "name": "Nvme$subsystem", 00:36:10.897 "trtype": "$TEST_TRANSPORT", 00:36:10.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.897 "adrfam": "ipv4", 00:36:10.897 "trsvcid": "$NVMF_PORT", 00:36:10.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.897 "hdgst": ${hdgst:-false}, 00:36:10.897 "ddgst": ${ddgst:-false} 00:36:10.897 }, 00:36:10.897 "method": "bdev_nvme_attach_controller" 00:36:10.897 } 00:36:10.897 EOF 00:36:10.897 )") 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
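[Note] The trace above assembles the payload for fio's --spdk_json_conf: each subsystem contributes one bdev_nvme_attach_controller fragment, the fragments are comma-joined via IFS, and the result is pretty-printed through jq (the printf output appears just below). A minimal standalone sketch of that assembly, assuming the standard SPDK JSON-config wrapper ({"subsystems":[{"subsystem":"bdev","config":[...]}]}); the helper name is hypothetical, while addresses, NQNs, and digest defaults are the values used in this run:

# Sketch only -- gen_nvmf_target_json's exact wrapper is not echoed to the log.
gen_sub_conf() {
  local sub config=()
  for sub in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # comma-join the per-subsystem fragments and wrap them as a bdev config array
  local IFS=,
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}
gen_sub_conf 0 1 > /tmp/bdev.json  # consumed by fio via --spdk_json_conf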
00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:10.897 "params": { 00:36:10.897 "name": "Nvme0", 00:36:10.897 "trtype": "tcp", 00:36:10.897 "traddr": "10.0.0.2", 00:36:10.897 "adrfam": "ipv4", 00:36:10.897 "trsvcid": "4420", 00:36:10.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.897 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.897 "hdgst": false, 00:36:10.897 "ddgst": false 00:36:10.897 }, 00:36:10.897 "method": "bdev_nvme_attach_controller" 00:36:10.897 },{ 00:36:10.897 "params": { 00:36:10.897 "name": "Nvme1", 00:36:10.897 "trtype": "tcp", 00:36:10.897 "traddr": "10.0.0.2", 00:36:10.897 "adrfam": "ipv4", 00:36:10.897 "trsvcid": "4420", 00:36:10.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:10.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:10.897 "hdgst": false, 00:36:10.897 "ddgst": false 00:36:10.897 }, 00:36:10.897 "method": "bdev_nvme_attach_controller" 00:36:10.897 }' 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:10.897 11:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.897 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:10.897 ... 00:36:10.897 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:10.897 ... 
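[Note] The fio job file generated by gen_fio_conf is fed in on /dev/fd/61 and never echoed, but the parameters set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the two filename banner lines above pin it down. A plausible reconstruction, with the bdev names (Nvme0n1/Nvme1n1) assumed from the Nvme0/Nvme1 attach-controller names; in fio, bs=8k,16k,128k sets the read, write, and trim block sizes, matching the (R)/(W)/(T) sizes in the banner, and 2 jobs x numjobs=2 gives the 4 threads started below:

cat > /tmp/rand_params.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Invocation as traced above: the SPDK fio plugin is LD_PRELOADed and the two
# config descriptors are replaced here by regular files.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/rand_params.fio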
00:36:10.897 fio-3.35 00:36:10.897 Starting 4 threads 00:36:10.897 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.179 00:36:16.179 filename0: (groupid=0, jobs=1): err= 0: pid=421511: Thu Jul 11 11:22:29 2024 00:36:16.179 read: IOPS=1910, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5002msec) 00:36:16.179 slat (usec): min=7, max=111, avg=17.58, stdev= 9.58 00:36:16.179 clat (usec): min=637, max=7750, avg=4122.56, stdev=541.74 00:36:16.179 lat (usec): min=652, max=7772, avg=4140.14, stdev=543.00 00:36:16.180 clat percentiles (usec): 00:36:16.180 | 1.00th=[ 2343], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3851], 00:36:16.180 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:16.180 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4817], 00:36:16.180 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 7242], 99.95th=[ 7570], 00:36:16.180 | 99.99th=[ 7767] 00:36:16.180 bw ( KiB/s): min=14720, max=16320, per=25.80%, avg=15281.50, stdev=457.31, samples=10 00:36:16.180 iops : min= 1840, max= 2040, avg=1910.10, stdev=57.09, samples=10 00:36:16.180 lat (usec) : 750=0.01%, 1000=0.04% 00:36:16.180 lat (msec) : 2=0.43%, 4=26.99%, 10=72.53% 00:36:16.180 cpu : usr=91.38%, sys=6.22%, ctx=115, majf=0, minf=10 00:36:16.180 IO depths : 1=0.6%, 2=15.9%, 4=57.3%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 issued rwts: total=9557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.180 filename0: (groupid=0, jobs=1): err= 0: pid=421512: Thu Jul 11 11:22:29 2024 00:36:16.180 read: IOPS=1843, BW=14.4MiB/s (15.1MB/s)(72.0MiB/5001msec) 00:36:16.180 slat (nsec): min=7242, max=69003, avg=17947.12, stdev=10787.80 00:36:16.180 clat (usec): min=927, max=7634, avg=4275.89, stdev=602.84 00:36:16.180 lat (usec): min=940, max=7642, avg=4293.84, stdev=602.53 00:36:16.180 clat percentiles (usec): 00:36:16.180 | 1.00th=[ 2638], 5.00th=[ 3458], 10.00th=[ 3752], 20.00th=[ 4047], 00:36:16.180 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:16.180 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5342], 00:36:16.180 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7373], 00:36:16.180 | 99.99th=[ 7635] 00:36:16.180 bw ( KiB/s): min=14288, max=15344, per=24.86%, avg=14728.44, stdev=379.58, samples=9 00:36:16.180 iops : min= 1786, max= 1918, avg=1841.00, stdev=47.51, samples=9 00:36:16.180 lat (usec) : 1000=0.04% 00:36:16.180 lat (msec) : 2=0.43%, 4=17.46%, 10=82.07% 00:36:16.180 cpu : usr=94.90%, sys=4.62%, ctx=10, majf=0, minf=9 00:36:16.180 IO depths : 1=0.5%, 2=14.2%, 4=58.3%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 issued rwts: total=9217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.180 filename1: (groupid=0, jobs=1): err= 0: pid=421513: Thu Jul 11 11:22:29 2024 00:36:16.180 read: IOPS=1807, BW=14.1MiB/s (14.8MB/s)(71.2MiB/5041msec) 00:36:16.180 slat (nsec): min=7349, max=68878, avg=18108.65, stdev=10992.03 00:36:16.180 clat (usec): min=815, max=41745, avg=4328.90, stdev=753.45 00:36:16.180 lat (usec): min=827, max=41759, avg=4347.01, stdev=752.98 
00:36:16.180 clat percentiles (usec): 00:36:16.180 | 1.00th=[ 2802], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 4047], 00:36:16.180 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:16.180 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 5014], 95.00th=[ 5604], 00:36:16.180 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7504], 99.95th=[ 7570], 00:36:16.180 | 99.99th=[41681] 00:36:16.180 bw ( KiB/s): min=14240, max=14976, per=24.61%, avg=14580.80, stdev=245.57, samples=10 00:36:16.180 iops : min= 1780, max= 1872, avg=1822.60, stdev=30.70, samples=10 00:36:16.180 lat (usec) : 1000=0.07% 00:36:16.180 lat (msec) : 2=0.38%, 4=15.71%, 10=83.83%, 50=0.01% 00:36:16.180 cpu : usr=94.56%, sys=4.98%, ctx=6, majf=0, minf=9 00:36:16.180 IO depths : 1=0.3%, 2=12.4%, 4=59.7%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 issued rwts: total=9114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.180 filename1: (groupid=0, jobs=1): err= 0: pid=421514: Thu Jul 11 11:22:29 2024 00:36:16.180 read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.8MiB/5003msec) 00:36:16.180 slat (nsec): min=7328, max=68746, avg=16310.29, stdev=10037.57 00:36:16.180 clat (usec): min=901, max=7866, avg=4182.74, stdev=557.76 00:36:16.180 lat (usec): min=918, max=7874, avg=4199.05, stdev=558.24 00:36:16.180 clat percentiles (usec): 00:36:16.180 | 1.00th=[ 2671], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3916], 00:36:16.180 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:36:16.180 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:36:16.180 | 99.00th=[ 5997], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7635], 00:36:16.180 | 99.99th=[ 7898] 00:36:16.180 bw ( KiB/s): min=14640, max=15760, per=25.49%, avg=15102.40, stdev=385.22, samples=10 00:36:16.180 iops : min= 1830, max= 1970, avg=1887.80, stdev=48.15, samples=10 00:36:16.180 lat (usec) : 1000=0.03% 00:36:16.180 lat (msec) : 2=0.21%, 4=24.09%, 10=75.67% 00:36:16.180 cpu : usr=95.00%, sys=4.50%, ctx=7, majf=0, minf=0 00:36:16.180 IO depths : 1=0.5%, 2=13.5%, 4=58.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.180 issued rwts: total=9440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:16.180 00:36:16.180 Run status group 0 (all jobs): 00:36:16.180 READ: bw=57.9MiB/s (60.7MB/s), 14.1MiB/s-14.9MiB/s (14.8MB/s-15.7MB/s), io=292MiB (306MB), run=5001-5041msec 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 
11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 00:36:16.180 real 0m24.134s 00:36:16.180 user 4m32.703s 00:36:16.180 sys 0m6.631s 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 ************************************ 00:36:16.180 END TEST fio_dif_rand_params 00:36:16.180 ************************************ 00:36:16.180 11:22:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:16.180 11:22:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:16.180 11:22:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:16.180 11:22:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 ************************************ 00:36:16.180 START TEST fio_dif_digest 00:36:16.180 ************************************ 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:16.180 11:22:30 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 bdev_null0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.180 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.181 [2024-07-11 11:22:30.273327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:16.181 { 00:36:16.181 "params": { 00:36:16.181 "name": "Nvme$subsystem", 00:36:16.181 "trtype": "$TEST_TRANSPORT", 00:36:16.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.181 "adrfam": "ipv4", 00:36:16.181 "trsvcid": "$NVMF_PORT", 00:36:16.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.181 "hdgst": ${hdgst:-false}, 
00:36:16.181 "ddgst": ${ddgst:-false} 00:36:16.181 }, 00:36:16.181 "method": "bdev_nvme_attach_controller" 00:36:16.181 } 00:36:16.181 EOF 00:36:16.181 )") 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:16.181 "params": { 00:36:16.181 "name": "Nvme0", 00:36:16.181 "trtype": "tcp", 00:36:16.181 "traddr": "10.0.0.2", 00:36:16.181 "adrfam": "ipv4", 00:36:16.181 "trsvcid": "4420", 00:36:16.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:16.181 "hdgst": true, 00:36:16.181 "ddgst": true 00:36:16.181 }, 00:36:16.181 "method": "bdev_nvme_attach_controller" 00:36:16.181 }' 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:16.181 11:22:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.181 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:16.181 ... 
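[Note] The header and data digests are enabled on the initiator side, in the bdev_nvme_attach_controller params printed just above ("hdgst": true, "ddgst": true); the fio job itself stays plain. A plausible reconstruction of that job from the dif.sh@127 parameters (bs=128k, numjobs=3, iodepth=3, runtime=10) and the job banner above, with the bdev name assumed as before; numjobs=3 on a single job section accounts for the 3 threads started below:

cat > /tmp/digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF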
00:36:16.181 fio-3.35 00:36:16.181 Starting 3 threads 00:36:16.181 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.389 00:36:28.389 filename0: (groupid=0, jobs=1): err= 0: pid=422262: Thu Jul 11 11:22:41 2024 00:36:28.389 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10049msec) 00:36:28.389 slat (nsec): min=4972, max=81426, avg=18243.82, stdev=4574.06 00:36:28.389 clat (usec): min=10971, max=53299, avg=14528.10, stdev=1527.14 00:36:28.389 lat (usec): min=10986, max=53321, avg=14546.34, stdev=1527.23 00:36:28.389 clat percentiles (usec): 00:36:28.389 | 1.00th=[12387], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:36:28.389 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:36:28.389 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:36:28.389 | 99.00th=[16909], 99.50th=[17433], 99.90th=[23462], 99.95th=[49021], 00:36:28.389 | 99.99th=[53216] 00:36:28.389 bw ( KiB/s): min=25856, max=26880, per=33.78%, avg=26447.40, stdev=309.16, samples=20 00:36:28.389 iops : min= 202, max= 210, avg=206.60, stdev= 2.44, samples=20 00:36:28.389 lat (msec) : 20=99.76%, 50=0.19%, 100=0.05% 00:36:28.389 cpu : usr=95.04%, sys=4.31%, ctx=85, majf=0, minf=197 00:36:28.389 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:28.389 filename0: (groupid=0, jobs=1): err= 0: pid=422263: Thu Jul 11 11:22:41 2024 00:36:28.389 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10049msec) 00:36:28.389 slat (nsec): min=7795, max=52690, avg=17631.10, stdev=4425.31 00:36:28.389 clat (usec): min=11722, max=51755, avg=14969.33, stdev=1478.34 00:36:28.389 lat (usec): min=11742, max=51773, avg=14986.96, stdev=1478.59 00:36:28.389 clat percentiles (usec): 00:36:28.389 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:36:28.389 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:36:28.389 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:36:28.389 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19006], 99.95th=[47973], 00:36:28.389 | 99.99th=[51643] 00:36:28.389 bw ( KiB/s): min=24576, max=26624, per=32.79%, avg=25666.55, stdev=462.77, samples=20 00:36:28.389 iops : min= 192, max= 208, avg=200.50, stdev= 3.61, samples=20 00:36:28.389 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:36:28.389 cpu : usr=94.18%, sys=4.93%, ctx=68, majf=0, minf=138 00:36:28.389 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:28.389 filename0: (groupid=0, jobs=1): err= 0: pid=422264: Thu Jul 11 11:22:41 2024 00:36:28.389 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10047msec) 00:36:28.389 slat (nsec): min=4912, max=52428, avg=17776.97, stdev=5063.30 00:36:28.389 clat (usec): min=11437, max=54321, avg=14526.89, stdev=1550.76 00:36:28.389 lat (usec): min=11463, max=54341, avg=14544.67, stdev=1550.67 00:36:28.389 clat percentiles (usec): 00:36:28.389 | 
1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:36:28.389 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:36:28.389 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16188], 00:36:28.389 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19792], 99.95th=[51643], 00:36:28.389 | 99.99th=[54264] 00:36:28.389 bw ( KiB/s): min=25344, max=27392, per=33.78%, avg=26444.80, stdev=455.68, samples=20 00:36:28.389 iops : min= 198, max= 214, avg=206.60, stdev= 3.56, samples=20 00:36:28.389 lat (msec) : 20=99.90%, 100=0.10% 00:36:28.389 cpu : usr=92.95%, sys=5.60%, ctx=224, majf=0, minf=177 00:36:28.389 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.389 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:28.389 00:36:28.389 Run status group 0 (all jobs): 00:36:28.389 READ: bw=76.5MiB/s (80.2MB/s), 25.0MiB/s-25.7MiB/s (26.2MB/s-27.0MB/s), io=768MiB (806MB), run=10047-10049msec 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.389 00:36:28.389 real 0m11.200s 00:36:28.389 user 0m29.421s 00:36:28.389 sys 0m1.780s 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:28.389 11:22:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.389 ************************************ 00:36:28.389 END TEST fio_dif_digest 00:36:28.389 ************************************ 00:36:28.389 11:22:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:28.389 11:22:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:28.389 11:22:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:36:28.389 rmmod nvme_tcp 00:36:28.389 rmmod nvme_fabrics 00:36:28.389 rmmod nvme_keyring 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 416339 ']' 00:36:28.389 11:22:41 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 416339 00:36:28.389 11:22:41 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 416339 ']' 00:36:28.389 11:22:41 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 416339 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 416339 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 416339' 00:36:28.390 killing process with pid 416339 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@967 -- # kill 416339 00:36:28.390 11:22:41 nvmf_dif -- common/autotest_common.sh@972 -- # wait 416339 00:36:28.390 11:22:41 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:28.390 11:22:41 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:28.649 Waiting for block devices as requested 00:36:28.649 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:28.906 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:28.906 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:28.906 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:29.164 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:29.164 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:29.164 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:29.164 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:29.421 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:29.421 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:29.421 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:29.421 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:29.678 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:29.678 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:29.678 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:29.678 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:29.936 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:29.936 11:22:44 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:29.936 11:22:44 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:29.936 11:22:44 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:29.936 11:22:44 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:29.936 11:22:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.936 11:22:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:29.936 11:22:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.474 11:22:46 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:32.474 00:36:32.474 real 1m6.626s 00:36:32.474 user 6m29.063s 00:36:32.474 sys 0m17.976s 00:36:32.474 11:22:46 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:32.474 
11:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:32.474 ************************************ 00:36:32.474 END TEST nvmf_dif 00:36:32.474 ************************************ 00:36:32.474 11:22:46 -- common/autotest_common.sh@1142 -- # return 0 00:36:32.474 11:22:46 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:32.474 11:22:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:32.474 11:22:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:32.474 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:36:32.474 ************************************ 00:36:32.474 START TEST nvmf_abort_qd_sizes 00:36:32.474 ************************************ 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:32.474 * Looking for test storage... 00:36:32.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:32.474 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.475 11:22:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:32.475 11:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:34.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:34.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:34.375 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:34.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:34.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
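What the scan above boils down to: gather_supported_nvmf_pci_devs filters the PCI bus for known Intel/Mellanox NIC IDs (here the two 0x8086:0x159b E810 ports), then resolves each PCI function to its kernel interface through sysfs. A minimal standalone sketch of that resolution step; the pci_devs list is hard-coded for illustration, and the trace's '[[ up == up ]]' link test is approximated with operstate:

#!/usr/bin/env bash
# Map supported PCI NICs to their net interfaces via sysfs.
# pci_devs is hard-coded here; the harness builds it from vendor:device IDs.
pci_devs=(0000:0a:00.0 0000:0a:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        dev=${path##*/}
        # approximation of the link-state check seen in the trace
        [[ $(cat "$path/operstate" 2>/dev/null) == up ]] || continue
        net_devs+=("$dev")
        echo "Found net devices under $pci: $dev"
    done
done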
00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:34.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:36:34.376 00:36:34.376 --- 10.0.0.2 ping statistics --- 00:36:34.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.376 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:34.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:36:34.376 00:36:34.376 --- 10.0.0.1 ping statistics --- 00:36:34.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.376 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:34.376 11:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.313 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:35.313 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:35.571 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:35.571 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:36.509 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=427173 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 427173 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 427173 ']' 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
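nvmf_tcp_init, traced above, is what lets a single host exercise a real NIC-to-NIC TCP path: one E810 port is moved into a private network namespace for the target side while the other stays in the root namespace for the initiator, and the cross-namespace pings verify both directions before any NVMe traffic flows. Reduced to the commands visible in the trace (run as root):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port, gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                    # sanity check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1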
00:36:36.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:36.767 11:22:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.767 [2024-07-11 11:22:51.008568] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:36:36.768 [2024-07-11 11:22:51.008642] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.768 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.768 [2024-07-11 11:22:51.069659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:36.768 [2024-07-11 11:22:51.153907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:36.768 [2024-07-11 11:22:51.153963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.768 [2024-07-11 11:22:51.153987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.768 [2024-07-11 11:22:51.153997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.768 [2024-07-11 11:22:51.154007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:36.768 [2024-07-11 11:22:51.154073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.768 [2024-07-11 11:22:51.154128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:36.768 [2024-07-11 11:22:51.154195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:36.768 [2024-07-11 11:22:51.154197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:37.026 11:22:51 
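nvmfappstart launches the SPDK target inside that namespace, so the app binds cvl_0_0/10.0.0.2 while the initiator-side tools stay in the root namespace, then blocks until the app answers on its UNIX-domain RPC socket. Roughly the following; the polling loop is a sketch of what waitforlisten does, not its exact code:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# poll the RPC socket until the target is ready to serve requests
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done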
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:37.026 11:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.026 ************************************ 00:36:37.026 START TEST spdk_target_abort 00:36:37.026 ************************************ 00:36:37.026 11:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:37.026 11:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:37.026 11:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:37.026 11:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.026 11:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.303 spdk_targetn1 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.304 [2024-07-11 11:22:54.175180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:40.304 [2024-07-11 11:22:54.207420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.304 11:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.304 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:43.580 Initializing NVMe Controllers 00:36:43.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.580 Initialization complete. Launching workers. 00:36:43.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12856, failed: 0 00:36:43.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 11629 00:36:43.580 success 752, unsuccess 475, failed 0 00:36:43.580 11:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:43.580 11:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:43.580 EAL: No free 2048 kB hugepages reported on node 1 00:36:46.856 Initializing NVMe Controllers 00:36:46.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:46.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:46.856 Initialization complete. Launching workers. 00:36:46.856 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8744, failed: 0 00:36:46.856 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7480 00:36:46.856 success 290, unsuccess 974, failed 0 00:36:46.856 11:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:46.856 11:23:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.856 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.134 Initializing NVMe Controllers 00:36:50.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.134 Initialization complete. Launching workers. 
00:36:50.134 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32875, failed: 0 00:36:50.134 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2724, failed to submit 30151 00:36:50.134 success 459, unsuccess 2265, failed 0 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.134 11:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 427173 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 427173 ']' 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 427173 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 427173 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 427173' 00:36:51.065 killing process with pid 427173 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 427173 00:36:51.065 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 427173 00:36:51.323 00:36:51.323 real 0m14.326s 00:36:51.323 user 0m54.300s 00:36:51.323 sys 0m2.550s 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:51.323 ************************************ 00:36:51.323 END TEST spdk_target_abort 00:36:51.323 ************************************ 00:36:51.323 11:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:51.323 11:23:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:51.323 11:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:51.323 11:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:51.323 11:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:51.323 
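Putting the spdk_target_abort body above into one place: the test provisions the target over JSON-RPC (a TCP transport, a PCIe-attached NVMe bdev, a subsystem carrying that bdev as namespace 1, and a TCP listener), then drives the abort example once per queue depth. The RPC calls and abort flags below are the ones visible in the trace; only the rpc.py wrapper is a simplification of the test's rpc_cmd helper:

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target   # yields spdk_targetn1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # -w rw -M 50: 50/50 read/write load; each run reports I/O completed,
    # aborts submitted vs. failed to submit, and per-abort success counts
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done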
************************************ 00:36:51.323 START TEST kernel_target_abort 00:36:51.323 ************************************ 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:51.323 11:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.698 Waiting for block devices as requested 00:36:52.698 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:52.698 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:52.956 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:52.956 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:52.956 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:52.956 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:53.215 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:53.215 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:53.215 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:53.474 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:53.474 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:53.474 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:53.474 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:53.474 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:53.732 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:53.732 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:53.732 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:53.990 No valid GPT data, bailing 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:53.990 11:23:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:53.990 00:36:53.990 Discovery Log Number of Records 2, Generation counter 2 00:36:53.990 =====Discovery Log Entry 0====== 00:36:53.990 trtype: tcp 00:36:53.990 adrfam: ipv4 00:36:53.990 subtype: current discovery subsystem 00:36:53.990 treq: not specified, sq flow control disable supported 00:36:53.990 portid: 1 00:36:53.990 trsvcid: 4420 00:36:53.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:53.990 traddr: 10.0.0.1 00:36:53.990 eflags: none 00:36:53.990 sectype: none 00:36:53.990 =====Discovery Log Entry 1====== 00:36:53.990 trtype: tcp 00:36:53.990 adrfam: ipv4 00:36:53.990 subtype: nvme subsystem 00:36:53.990 treq: not specified, sq flow control disable supported 00:36:53.990 portid: 1 00:36:53.990 trsvcid: 4420 00:36:53.990 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:53.990 traddr: 10.0.0.1 00:36:53.990 eflags: none 00:36:53.990 sectype: none 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.990 11:23:08 
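kernel_target_abort swaps the SPDK target for the in-kernel nvmet one, built entirely through configfs as traced above. An xtrace does not print redirection targets, so the attribute filenames below are the standard nvmet ones and attr_serial as the destination of the SPDK-... string is an assumption; the nvme discover at the end should return the two-entry discovery log shown above:

set -e
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                      # the teardown later removes nvmet_tcp too
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed target file
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # publish the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420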
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.990 11:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.990 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.327 Initializing NVMe Controllers 00:36:57.327 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.327 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.327 Initialization complete. Launching workers. 00:36:57.327 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55506, failed: 0 00:36:57.327 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55506, failed to submit 0 00:36:57.327 success 0, unsuccess 55506, failed 0 00:36:57.327 11:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.327 11:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.327 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.611 Initializing NVMe Controllers 00:37:00.611 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.611 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.611 Initialization complete. Launching workers. 
00:37:00.611 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100454, failed: 0 00:37:00.611 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25294, failed to submit 75160 00:37:00.611 success 0, unsuccess 25294, failed 0 00:37:00.611 11:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:00.611 11:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.611 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.887 Initializing NVMe Controllers 00:37:03.887 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:03.887 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:03.887 Initialization complete. Launching workers. 00:37:03.887 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96864, failed: 0 00:37:03.887 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24206, failed to submit 72658 00:37:03.887 success 0, unsuccess 24206, failed 0 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:03.887 11:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:04.453 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:04.453 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:04.453 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:04.453 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:04.711 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:05.647 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:05.647 00:37:05.647 real 0m14.191s 00:37:05.647 user 0m6.586s 00:37:05.647 sys 0m3.106s 00:37:05.647 11:23:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:05.647 11:23:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.647 ************************************ 00:37:05.647 END TEST kernel_target_abort 00:37:05.647 ************************************ 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:05.647 rmmod nvme_tcp 00:37:05.647 rmmod nvme_fabrics 00:37:05.647 rmmod nvme_keyring 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 427173 ']' 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 427173 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 427173 ']' 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 427173 00:37:05.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (427173) - No such process 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 427173 is not found' 00:37:05.647 Process with pid 427173 is not found 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:05.647 11:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.020 Waiting for block devices as requested 00:37:07.020 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:07.020 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:07.020 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:07.279 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:07.279 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:07.279 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:07.279 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:07.538 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:07.538 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:07.538 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:07.538 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:07.797 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:07.797 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:07.797 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:37:08.056 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:08.056 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:08.056 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.317 11:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.222 11:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:10.222 00:37:10.222 real 0m38.184s 00:37:10.222 user 1m3.040s 00:37:10.222 sys 0m9.167s 00:37:10.222 11:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:10.222 11:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:10.222 ************************************ 00:37:10.222 END TEST nvmf_abort_qd_sizes 00:37:10.222 ************************************ 00:37:10.222 11:23:24 -- common/autotest_common.sh@1142 -- # return 0 00:37:10.222 11:23:24 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.222 11:23:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:10.222 11:23:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.222 11:23:24 -- common/autotest_common.sh@10 -- # set +x 00:37:10.222 ************************************ 00:37:10.222 START TEST keyring_file 00:37:10.222 ************************************ 00:37:10.222 11:23:24 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:10.480 * Looking for test storage... 
00:37:10.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:10.480 11:23:24 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:10.480 11:23:24 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.480 11:23:24 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.480 11:23:24 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.480 11:23:24 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.481 11:23:24 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.481 11:23:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.481 11:23:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.481 11:23:24 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.481 11:23:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:10.481 11:23:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jmKYODzCRs 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.481 11:23:24 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jmKYODzCRs 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jmKYODzCRs 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jmKYODzCRs 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.esZli40IQ0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.481 11:23:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.esZli40IQ0 00:37:10.481 11:23:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.esZli40IQ0 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.esZli40IQ0 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=432915 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:10.481 11:23:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 432915 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 432915 ']' 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:10.481 11:23:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.481 [2024-07-11 11:23:24.822923] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
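The prep_key trace above is the whole recipe for minting a TLS PSK this test will use: format the raw hex key, write it to a fresh temp file, and lock the file down to mode 0600. A minimal standalone sketch of the same flow, assuming the NVMeTLSkey-1 interchange layout that format_interchange_psk appears to emit here (base64 of the raw key plus a little-endian CRC32, with digest 0 rendered as "00", meaning no HMAC):

key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key_hex" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended to the key
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
chmod 0600 "$path"  # the keyring later refuses key files with looser permissions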
00:37:10.481 [2024-07-11 11:23:24.823021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432915 ] 00:37:10.481 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.481 [2024-07-11 11:23:24.880093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.739 [2024-07-11 11:23:24.961969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:10.997 11:23:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.997 [2024-07-11 11:23:25.204191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.997 null0 00:37:10.997 [2024-07-11 11:23:25.236238] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:10.997 [2024-07-11 11:23:25.236667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:10.997 [2024-07-11 11:23:25.244251] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.997 11:23:25 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.997 [2024-07-11 11:23:25.256268] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:10.997 request: 00:37:10.997 { 00:37:10.997 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.997 "secure_channel": false, 00:37:10.997 "listen_address": { 00:37:10.997 "trtype": "tcp", 00:37:10.997 "traddr": "127.0.0.1", 00:37:10.997 "trsvcid": "4420" 00:37:10.997 }, 00:37:10.997 "method": "nvmf_subsystem_add_listener", 00:37:10.997 "req_id": 1 00:37:10.997 } 00:37:10.997 Got JSON-RPC error response 00:37:10.997 response: 00:37:10.997 { 00:37:10.997 "code": -32602, 00:37:10.997 "message": "Invalid parameters" 00:37:10.997 } 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 
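The failed rpc_cmd above is deliberate: the script re-adds a listener the target already opened on 127.0.0.1:4420 and expects the JSON-RPC error seen here (-32602, "Listener already exists" surfaced as Invalid parameters). A hedged recreation of the pair of calls, assuming the target's default RPC socket /var/tmp/spdk.sock:

scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
# The second, identical add must fail; NOT inverts the exit status so the test passes:
NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0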
00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:10.997 11:23:25 keyring_file -- keyring/file.sh@46 -- # bperfpid=432934 00:37:10.997 11:23:25 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:10.997 11:23:25 keyring_file -- keyring/file.sh@48 -- # waitforlisten 432934 /var/tmp/bperf.sock 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 432934 ']' 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:10.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:10.997 11:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:10.997 [2024-07-11 11:23:25.301799] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 00:37:10.997 [2024-07-11 11:23:25.301887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432934 ] 00:37:10.997 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.997 [2024-07-11 11:23:25.359398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.254 [2024-07-11 11:23:25.448336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.254 11:23:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:11.254 11:23:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:11.254 11:23:25 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:11.255 11:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:11.512 11:23:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.esZli40IQ0 00:37:11.512 11:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.esZli40IQ0 00:37:11.769 11:23:26 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:11.769 11:23:26 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:11.769 11:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.769 11:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.769 11:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.026 11:23:26 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.jmKYODzCRs == \/\t\m\p\/\t\m\p\.\j\m\K\Y\O\D\z\C\R\s ]] 00:37:12.026 11:23:26 keyring_file -- keyring/file.sh@52 
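Both temp-file keys are handed to the bdevperf instance over its private RPC socket, and the test immediately reads them back to confirm the keyring recorded the right backing path, as the get_key trace continuing below shows. The same two steps in isolation (socket and /tmp names taken from this run):

scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs
# The keyring should now report the file that backs key0:
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .path'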
-- # get_key key1 00:37:12.026 11:23:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:12.026 11:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.026 11:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.026 11:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.282 11:23:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.esZli40IQ0 == \/\t\m\p\/\t\m\p\.\e\s\Z\l\i\4\0\I\Q\0 ]] 00:37:12.282 11:23:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:12.282 11:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.282 11:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.282 11:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.282 11:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.282 11:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.538 11:23:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:12.538 11:23:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:12.538 11:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:12.538 11:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.538 11:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.538 11:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:12.538 11:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.796 11:23:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:12.796 11:23:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.796 11:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:13.053 [2024-07-11 11:23:27.263450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:13.053 nvme0n1 00:37:13.053 11:23:27 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:13.053 11:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:13.053 11:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.053 11:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.053 11:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.053 11:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.310 11:23:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:13.310 11:23:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:13.310 11:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:13.310 11:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:13.310 11:23:27 keyring_file -- keyring/common.sh@10 -- # 
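With key0 registered, the attach above performs a full NVMe/TCP TLS handshake, and the refcount checks confirm the live session pins the key: key0 climbs to 2 (the keyring reference plus the connection) while unused key1 stays at 1. Condensed from the calls traced here:

scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0") | .refcnt'   # 2 while the TLS session is up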
bperf_cmd keyring_get_keys
00:37:13.310 11:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:13.310 11:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:13.568 11:23:27 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:37:13.568 11:23:27 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:13.568 Running I/O for 1 seconds...
00:37:14.940
00:37:14.940 Latency(us)
00:37:14.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:14.940 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:14.940 nvme0n1 : 1.01 10221.10 39.93 0.00 0.00 12474.92 3689.43 17864.63
00:37:14.940 ===================================================================================================================
00:37:14.940 Total : 10221.10 39.93 0.00 0.00 12474.92 3689.43 17864.63
00:37:14.940 0
00:37:14.940 11:23:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:14.940 11:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:14.940 11:23:29 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:37:14.940 11:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:14.940 11:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:14.940 11:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:14.940 11:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:14.940 11:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:15.197 11:23:29 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:37:15.197 11:23:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:37:15.197 11:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:15.197 11:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:15.197 11:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:15.197 11:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:15.197 11:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:15.455 11:23:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:37:15.455 11:23:29 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.455 11:23:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.455 11:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:15.713 [2024-07-11 11:23:29.964823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:15.713 [2024-07-11 11:23:29.964890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed1710 (107): Transport endpoint is not connected 00:37:15.713 [2024-07-11 11:23:29.965881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed1710 (9): Bad file descriptor 00:37:15.713 [2024-07-11 11:23:29.966880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:15.713 [2024-07-11 11:23:29.966902] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:15.713 [2024-07-11 11:23:29.966917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:15.713 request: 00:37:15.713 { 00:37:15.713 "name": "nvme0", 00:37:15.713 "trtype": "tcp", 00:37:15.713 "traddr": "127.0.0.1", 00:37:15.713 "adrfam": "ipv4", 00:37:15.713 "trsvcid": "4420", 00:37:15.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.713 "prchk_reftag": false, 00:37:15.713 "prchk_guard": false, 00:37:15.713 "hdgst": false, 00:37:15.713 "ddgst": false, 00:37:15.713 "psk": "key1", 00:37:15.713 "method": "bdev_nvme_attach_controller", 00:37:15.713 "req_id": 1 00:37:15.713 } 00:37:15.713 Got JSON-RPC error response 00:37:15.713 response: 00:37:15.713 { 00:37:15.713 "code": -5, 00:37:15.713 "message": "Input/output error" 00:37:15.713 } 00:37:15.713 11:23:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:15.713 11:23:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.713 11:23:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.713 11:23:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.713 11:23:29 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:15.713 11:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:15.713 11:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:15.713 11:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.713 11:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.713 11:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:15.971 11:23:30 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:15.971 11:23:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:15.971 11:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:15.971 11:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r 
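The negative attach traced here uses key1, which the target never associated with host0, so the TLS handshake is torn down and the RPC reports -5 (Input/output error). The NOT wrapper driving this expected-failure check boils down to inverting an exit status; a trimmed sketch, assuming the fuller argument validation in autotest_common.sh is out of scope:

NOT() { if "$@"; then return 1; else return 0; fi; }
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1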
.refcnt 00:37:15.971 11:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:15.971 11:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.971 11:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.229 11:23:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:16.229 11:23:30 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:16.229 11:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:16.487 11:23:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:16.487 11:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:16.744 11:23:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:16.744 11:23:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:16.744 11:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.002 11:23:31 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:17.002 11:23:31 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.jmKYODzCRs 00:37:17.002 11:23:31 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.002 11:23:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.002 11:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.260 [2024-07-11 11:23:31.467916] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jmKYODzCRs': 0100660 00:37:17.260 [2024-07-11 11:23:31.467953] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:17.260 request: 00:37:17.260 { 00:37:17.260 "name": "key0", 00:37:17.260 "path": "/tmp/tmp.jmKYODzCRs", 00:37:17.260 "method": "keyring_file_add_key", 00:37:17.260 "req_id": 1 00:37:17.260 } 00:37:17.260 Got JSON-RPC error response 00:37:17.260 response: 00:37:17.260 { 00:37:17.260 "code": -1, 00:37:17.260 "message": "Operation not permitted" 00:37:17.260 } 00:37:17.260 11:23:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:17.260 11:23:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:17.260 11:23:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:17.260 11:23:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
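The chmod 0660 experiment above is the permission gate in action: keyring_file_check_path rejects any key file that is readable or writable beyond its owner, so the add fails with -1 (Operation not permitted) until the mode is restored. Replayed in isolation:

chmod 0660 /tmp/tmp.jmKYODzCRs
NOT scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs   # -1: bad mode
chmod 0600 /tmp/tmp.jmKYODzCRs
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs       # accepted again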
00:37:17.260 11:23:31 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.jmKYODzCRs 00:37:17.260 11:23:31 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.260 11:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jmKYODzCRs 00:37:17.518 11:23:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.jmKYODzCRs 00:37:17.518 11:23:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:17.518 11:23:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.518 11:23:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.518 11:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.518 11:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.518 11:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.776 11:23:31 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:17.776 11:23:31 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:17.776 11:23:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:17.776 11:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.035 [2024-07-11 11:23:32.221992] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jmKYODzCRs': No such file or directory 00:37:18.035 [2024-07-11 11:23:32.222053] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:18.035 [2024-07-11 11:23:32.222080] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:18.035 [2024-07-11 11:23:32.222091] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:18.035 [2024-07-11 11:23:32.222102] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:18.035 request: 00:37:18.035 { 00:37:18.035 "name": "nvme0", 00:37:18.035 "trtype": "tcp", 00:37:18.035 "traddr": "127.0.0.1", 00:37:18.035 "adrfam": "ipv4", 00:37:18.035 "trsvcid": "4420", 00:37:18.035 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:18.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:18.035 "prchk_reftag": false, 00:37:18.035 "prchk_guard": false, 00:37:18.035 "hdgst": false, 00:37:18.035 "ddgst": false, 00:37:18.035 "psk": "key0", 00:37:18.035 "method": "bdev_nvme_attach_controller", 00:37:18.035 "req_id": 1 00:37:18.035 } 00:37:18.035 Got JSON-RPC error response 00:37:18.035 response: 00:37:18.035 { 00:37:18.035 "code": -19, 00:37:18.035 "message": "No such device" 00:37:18.035 } 00:37:18.035 11:23:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:18.035 11:23:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:18.035 11:23:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:18.035 11:23:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:18.035 11:23:32 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:18.035 11:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:18.293 11:23:32 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XMRUt9EPrO 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:18.293 11:23:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XMRUt9EPrO 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XMRUt9EPrO 00:37:18.293 11:23:32 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.XMRUt9EPrO 00:37:18.293 11:23:32 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XMRUt9EPrO 00:37:18.293 11:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XMRUt9EPrO 00:37:18.551 11:23:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.551 11:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.808 nvme0n1 00:37:18.808 11:23:33 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:37:18.808 11:23:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.808 11:23:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.808 11:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.808 11:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.808 11:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.065 11:23:33 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:19.065 11:23:33 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:19.065 11:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:19.322 11:23:33 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:19.322 11:23:33 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:19.322 11:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.322 11:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.322 11:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.605 11:23:33 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:19.605 11:23:33 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:19.605 11:23:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.605 11:23:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.605 11:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.605 11:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.605 11:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.864 11:23:34 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:19.864 11:23:34 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:19.864 11:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.122 11:23:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:20.122 11:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.122 11:23:34 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:20.380 11:23:34 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:20.381 11:23:34 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XMRUt9EPrO 00:37:20.381 11:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XMRUt9EPrO 00:37:20.638 11:23:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.esZli40IQ0 00:37:20.638 11:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.esZli40IQ0 00:37:20.896 11:23:35 keyring_file -- keyring/file.sh@109 -- # 
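Removing a key that an attached controller still holds does not free it outright: the trace above shows keyring_file_remove_key marking key0 as removed while its refcount stays at 1 until the controller detaches. The check pattern, condensed:

scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0") | {removed, refcnt}'   # removed: true, refcnt still 1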
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:20.896 11:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:21.155 nvme0n1 00:37:21.155 11:23:35 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:21.155 11:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:21.414 11:23:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:21.414 "subsystems": [ 00:37:21.414 { 00:37:21.414 "subsystem": "keyring", 00:37:21.414 "config": [ 00:37:21.414 { 00:37:21.414 "method": "keyring_file_add_key", 00:37:21.414 "params": { 00:37:21.414 "name": "key0", 00:37:21.414 "path": "/tmp/tmp.XMRUt9EPrO" 00:37:21.414 } 00:37:21.414 }, 00:37:21.414 { 00:37:21.414 "method": "keyring_file_add_key", 00:37:21.414 "params": { 00:37:21.414 "name": "key1", 00:37:21.414 "path": "/tmp/tmp.esZli40IQ0" 00:37:21.414 } 00:37:21.414 } 00:37:21.414 ] 00:37:21.414 }, 00:37:21.414 { 00:37:21.414 "subsystem": "iobuf", 00:37:21.414 "config": [ 00:37:21.414 { 00:37:21.414 "method": "iobuf_set_options", 00:37:21.414 "params": { 00:37:21.414 "small_pool_count": 8192, 00:37:21.414 "large_pool_count": 1024, 00:37:21.414 "small_bufsize": 8192, 00:37:21.414 "large_bufsize": 135168 00:37:21.414 } 00:37:21.414 } 00:37:21.414 ] 00:37:21.414 }, 00:37:21.414 { 00:37:21.414 "subsystem": "sock", 00:37:21.414 "config": [ 00:37:21.414 { 00:37:21.414 "method": "sock_set_default_impl", 00:37:21.414 "params": { 00:37:21.414 "impl_name": "posix" 00:37:21.414 } 00:37:21.414 }, 00:37:21.414 { 00:37:21.414 "method": "sock_impl_set_options", 00:37:21.414 "params": { 00:37:21.414 "impl_name": "ssl", 00:37:21.414 "recv_buf_size": 4096, 00:37:21.414 "send_buf_size": 4096, 00:37:21.414 "enable_recv_pipe": true, 00:37:21.414 "enable_quickack": false, 00:37:21.414 "enable_placement_id": 0, 00:37:21.414 "enable_zerocopy_send_server": true, 00:37:21.414 "enable_zerocopy_send_client": false, 00:37:21.414 "zerocopy_threshold": 0, 00:37:21.414 "tls_version": 0, 00:37:21.414 "enable_ktls": false 00:37:21.414 } 00:37:21.414 }, 00:37:21.414 { 00:37:21.414 "method": "sock_impl_set_options", 00:37:21.415 "params": { 00:37:21.415 "impl_name": "posix", 00:37:21.415 "recv_buf_size": 2097152, 00:37:21.415 "send_buf_size": 2097152, 00:37:21.415 "enable_recv_pipe": true, 00:37:21.415 "enable_quickack": false, 00:37:21.415 "enable_placement_id": 0, 00:37:21.415 "enable_zerocopy_send_server": true, 00:37:21.415 "enable_zerocopy_send_client": false, 00:37:21.415 "zerocopy_threshold": 0, 00:37:21.415 "tls_version": 0, 00:37:21.415 "enable_ktls": false 00:37:21.415 } 00:37:21.415 } 00:37:21.415 ] 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "subsystem": "vmd", 00:37:21.415 "config": [] 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "subsystem": "accel", 00:37:21.415 "config": [ 00:37:21.415 { 00:37:21.415 "method": "accel_set_options", 00:37:21.415 "params": { 00:37:21.415 "small_cache_size": 128, 00:37:21.415 "large_cache_size": 16, 00:37:21.415 "task_count": 2048, 00:37:21.415 "sequence_count": 2048, 00:37:21.415 "buf_count": 2048 00:37:21.415 } 00:37:21.415 } 00:37:21.415 ] 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 
"subsystem": "bdev", 00:37:21.415 "config": [ 00:37:21.415 { 00:37:21.415 "method": "bdev_set_options", 00:37:21.415 "params": { 00:37:21.415 "bdev_io_pool_size": 65535, 00:37:21.415 "bdev_io_cache_size": 256, 00:37:21.415 "bdev_auto_examine": true, 00:37:21.415 "iobuf_small_cache_size": 128, 00:37:21.415 "iobuf_large_cache_size": 16 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_raid_set_options", 00:37:21.415 "params": { 00:37:21.415 "process_window_size_kb": 1024 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_iscsi_set_options", 00:37:21.415 "params": { 00:37:21.415 "timeout_sec": 30 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_nvme_set_options", 00:37:21.415 "params": { 00:37:21.415 "action_on_timeout": "none", 00:37:21.415 "timeout_us": 0, 00:37:21.415 "timeout_admin_us": 0, 00:37:21.415 "keep_alive_timeout_ms": 10000, 00:37:21.415 "arbitration_burst": 0, 00:37:21.415 "low_priority_weight": 0, 00:37:21.415 "medium_priority_weight": 0, 00:37:21.415 "high_priority_weight": 0, 00:37:21.415 "nvme_adminq_poll_period_us": 10000, 00:37:21.415 "nvme_ioq_poll_period_us": 0, 00:37:21.415 "io_queue_requests": 512, 00:37:21.415 "delay_cmd_submit": true, 00:37:21.415 "transport_retry_count": 4, 00:37:21.415 "bdev_retry_count": 3, 00:37:21.415 "transport_ack_timeout": 0, 00:37:21.415 "ctrlr_loss_timeout_sec": 0, 00:37:21.415 "reconnect_delay_sec": 0, 00:37:21.415 "fast_io_fail_timeout_sec": 0, 00:37:21.415 "disable_auto_failback": false, 00:37:21.415 "generate_uuids": false, 00:37:21.415 "transport_tos": 0, 00:37:21.415 "nvme_error_stat": false, 00:37:21.415 "rdma_srq_size": 0, 00:37:21.415 "io_path_stat": false, 00:37:21.415 "allow_accel_sequence": false, 00:37:21.415 "rdma_max_cq_size": 0, 00:37:21.415 "rdma_cm_event_timeout_ms": 0, 00:37:21.415 "dhchap_digests": [ 00:37:21.415 "sha256", 00:37:21.415 "sha384", 00:37:21.415 "sha512" 00:37:21.415 ], 00:37:21.415 "dhchap_dhgroups": [ 00:37:21.415 "null", 00:37:21.415 "ffdhe2048", 00:37:21.415 "ffdhe3072", 00:37:21.415 "ffdhe4096", 00:37:21.415 "ffdhe6144", 00:37:21.415 "ffdhe8192" 00:37:21.415 ] 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_nvme_attach_controller", 00:37:21.415 "params": { 00:37:21.415 "name": "nvme0", 00:37:21.415 "trtype": "TCP", 00:37:21.415 "adrfam": "IPv4", 00:37:21.415 "traddr": "127.0.0.1", 00:37:21.415 "trsvcid": "4420", 00:37:21.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.415 "prchk_reftag": false, 00:37:21.415 "prchk_guard": false, 00:37:21.415 "ctrlr_loss_timeout_sec": 0, 00:37:21.415 "reconnect_delay_sec": 0, 00:37:21.415 "fast_io_fail_timeout_sec": 0, 00:37:21.415 "psk": "key0", 00:37:21.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.415 "hdgst": false, 00:37:21.415 "ddgst": false 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_nvme_set_hotplug", 00:37:21.415 "params": { 00:37:21.415 "period_us": 100000, 00:37:21.415 "enable": false 00:37:21.415 } 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "method": "bdev_wait_for_examine" 00:37:21.415 } 00:37:21.415 ] 00:37:21.415 }, 00:37:21.415 { 00:37:21.415 "subsystem": "nbd", 00:37:21.415 "config": [] 00:37:21.415 } 00:37:21.415 ] 00:37:21.415 }' 00:37:21.415 11:23:35 keyring_file -- keyring/file.sh@114 -- # killprocess 432934 00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 432934 ']' 00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 432934 00:37:21.415 11:23:35 
keyring_file -- common/autotest_common.sh@953 -- # uname
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432934
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432934'
00:37:21.415 killing process with pid 432934
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@967 -- # kill 432934
00:37:21.415 Received shutdown signal, test time was about 1.000000 seconds
00:37:21.415
00:37:21.415 Latency(us)
00:37:21.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:21.415 ===================================================================================================================
00:37:21.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:21.415 11:23:35 keyring_file -- common/autotest_common.sh@972 -- # wait 432934
00:37:21.415 11:23:35 keyring_file -- keyring/file.sh@117 -- # bperfpid=434381
00:37:21.674 11:23:35 keyring_file -- keyring/file.sh@119 -- # waitforlisten 434381 /var/tmp/bperf.sock
00:37:21.674 11:23:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 434381 ']'
00:37:21.674 11:23:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:21.674 11:23:35 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:37:21.674 11:23:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:21.674 11:23:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:21.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
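The teardown just traced is the stock killprocess helper: verify the pid, peek at the process name (reactor_1 for this bdevperf), skip the sudo special case, then kill and reap. A trimmed sketch of that flow, assuming the extra guards in autotest_common.sh are elided:

killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                 # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1
    [ "$name" = sudo ] && return 1             # never kill a sudo wrapper blindly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap and collect the exit status
}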
00:37:21.674 11:23:35 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:21.674 "subsystems": [ 00:37:21.674 { 00:37:21.674 "subsystem": "keyring", 00:37:21.674 "config": [ 00:37:21.674 { 00:37:21.674 "method": "keyring_file_add_key", 00:37:21.674 "params": { 00:37:21.674 "name": "key0", 00:37:21.674 "path": "/tmp/tmp.XMRUt9EPrO" 00:37:21.674 } 00:37:21.674 }, 00:37:21.674 { 00:37:21.674 "method": "keyring_file_add_key", 00:37:21.674 "params": { 00:37:21.674 "name": "key1", 00:37:21.674 "path": "/tmp/tmp.esZli40IQ0" 00:37:21.674 } 00:37:21.674 } 00:37:21.674 ] 00:37:21.674 }, 00:37:21.674 { 00:37:21.674 "subsystem": "iobuf", 00:37:21.674 "config": [ 00:37:21.674 { 00:37:21.674 "method": "iobuf_set_options", 00:37:21.674 "params": { 00:37:21.674 "small_pool_count": 8192, 00:37:21.674 "large_pool_count": 1024, 00:37:21.674 "small_bufsize": 8192, 00:37:21.674 "large_bufsize": 135168 00:37:21.674 } 00:37:21.674 } 00:37:21.674 ] 00:37:21.674 }, 00:37:21.674 { 00:37:21.674 "subsystem": "sock", 00:37:21.674 "config": [ 00:37:21.674 { 00:37:21.674 "method": "sock_set_default_impl", 00:37:21.674 "params": { 00:37:21.674 "impl_name": "posix" 00:37:21.674 } 00:37:21.674 }, 00:37:21.674 { 00:37:21.674 "method": "sock_impl_set_options", 00:37:21.674 "params": { 00:37:21.674 "impl_name": "ssl", 00:37:21.674 "recv_buf_size": 4096, 00:37:21.674 "send_buf_size": 4096, 00:37:21.674 "enable_recv_pipe": true, 00:37:21.674 "enable_quickack": false, 00:37:21.674 "enable_placement_id": 0, 00:37:21.674 "enable_zerocopy_send_server": true, 00:37:21.674 "enable_zerocopy_send_client": false, 00:37:21.674 "zerocopy_threshold": 0, 00:37:21.675 "tls_version": 0, 00:37:21.675 "enable_ktls": false 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "sock_impl_set_options", 00:37:21.675 "params": { 00:37:21.675 "impl_name": "posix", 00:37:21.675 "recv_buf_size": 2097152, 00:37:21.675 "send_buf_size": 2097152, 00:37:21.675 "enable_recv_pipe": true, 00:37:21.675 "enable_quickack": false, 00:37:21.675 "enable_placement_id": 0, 00:37:21.675 "enable_zerocopy_send_server": true, 00:37:21.675 "enable_zerocopy_send_client": false, 00:37:21.675 "zerocopy_threshold": 0, 00:37:21.675 "tls_version": 0, 00:37:21.675 "enable_ktls": false 00:37:21.675 } 00:37:21.675 } 00:37:21.675 ] 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "subsystem": "vmd", 00:37:21.675 "config": [] 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "subsystem": "accel", 00:37:21.675 "config": [ 00:37:21.675 { 00:37:21.675 "method": "accel_set_options", 00:37:21.675 "params": { 00:37:21.675 "small_cache_size": 128, 00:37:21.675 "large_cache_size": 16, 00:37:21.675 "task_count": 2048, 00:37:21.675 "sequence_count": 2048, 00:37:21.675 "buf_count": 2048 00:37:21.675 } 00:37:21.675 } 00:37:21.675 ] 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "subsystem": "bdev", 00:37:21.675 "config": [ 00:37:21.675 { 00:37:21.675 "method": "bdev_set_options", 00:37:21.675 "params": { 00:37:21.675 "bdev_io_pool_size": 65535, 00:37:21.675 "bdev_io_cache_size": 256, 00:37:21.675 "bdev_auto_examine": true, 00:37:21.675 "iobuf_small_cache_size": 128, 00:37:21.675 "iobuf_large_cache_size": 16 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "bdev_raid_set_options", 00:37:21.675 "params": { 00:37:21.675 "process_window_size_kb": 1024 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "bdev_iscsi_set_options", 00:37:21.675 "params": { 00:37:21.675 "timeout_sec": 30 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": 
"bdev_nvme_set_options", 00:37:21.675 "params": { 00:37:21.675 "action_on_timeout": "none", 00:37:21.675 "timeout_us": 0, 00:37:21.675 "timeout_admin_us": 0, 00:37:21.675 "keep_alive_timeout_ms": 10000, 00:37:21.675 "arbitration_burst": 0, 00:37:21.675 "low_priority_weight": 0, 00:37:21.675 "medium_priority_weight": 0, 00:37:21.675 "high_priority_weight": 0, 00:37:21.675 "nvme_adminq_poll_period_us": 10000, 00:37:21.675 "nvme_ioq_poll_period_us": 0, 00:37:21.675 "io_queue_requests": 512, 00:37:21.675 "delay_cmd_submit": true, 00:37:21.675 "transport_retry_count": 4, 00:37:21.675 "bdev_retry_count": 3, 00:37:21.675 "transport_ack_timeout": 0, 00:37:21.675 "ctrlr_loss_timeout_sec": 0, 00:37:21.675 "reconnect_delay_sec": 0, 00:37:21.675 "fast_io_fail_timeout_sec": 0, 00:37:21.675 "disable_auto_failback": false, 00:37:21.675 "generate_uuids": false, 00:37:21.675 "transport_tos": 0, 00:37:21.675 "nvme_error_stat": false, 00:37:21.675 "rdma_srq_size": 0, 00:37:21.675 "io_path_stat": false, 00:37:21.675 "allow_accel_sequence": false, 00:37:21.675 "rdma_max_cq_size": 0, 00:37:21.675 "rdma_cm_event_timeout_ms": 0, 00:37:21.675 "dhchap_digests": [ 00:37:21.675 "sha256", 00:37:21.675 "sha384", 00:37:21.675 "sha512" 00:37:21.675 ], 00:37:21.675 "dhchap_dhgroups": [ 00:37:21.675 "null", 00:37:21.675 "ffdhe2048", 00:37:21.675 "ffdhe3072", 00:37:21.675 "ffdhe4096", 00:37:21.675 "ffdhe6144", 00:37:21.675 "ffdhe8192" 00:37:21.675 ] 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "bdev_nvme_attach_controller", 00:37:21.675 "params": { 00:37:21.675 "name": "nvme0", 00:37:21.675 "trtype": "TCP", 00:37:21.675 "adrfam": "IPv4", 00:37:21.675 "traddr": "127.0.0.1", 00:37:21.675 "trsvcid": "4420", 00:37:21.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.675 "prchk_reftag": false, 00:37:21.675 "prchk_guard": false, 00:37:21.675 "ctrlr_loss_timeout_sec": 0, 00:37:21.675 "reconnect_delay_sec": 0, 00:37:21.675 "fast_io_fail_timeout_sec": 0, 00:37:21.675 "psk": "key0", 00:37:21.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.675 "hdgst": false, 00:37:21.675 "ddgst": false 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "bdev_nvme_set_hotplug", 00:37:21.675 "params": { 00:37:21.675 "period_us": 100000, 00:37:21.675 "enable": false 00:37:21.675 } 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "method": "bdev_wait_for_examine" 00:37:21.675 } 00:37:21.675 ] 00:37:21.675 }, 00:37:21.675 { 00:37:21.675 "subsystem": "nbd", 00:37:21.675 "config": [] 00:37:21.675 } 00:37:21.675 ] 00:37:21.675 }' 00:37:21.675 11:23:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:21.675 11:23:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.675 [2024-07-11 11:23:35.997911] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
00:37:21.675 [2024-07-11 11:23:35.998007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434381 ] 00:37:21.675 EAL: No free 2048 kB hugepages reported on node 1 00:37:21.675 [2024-07-11 11:23:36.054389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.934 [2024-07-11 11:23:36.135985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.934 [2024-07-11 11:23:36.319548] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:22.869 11:23:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:22.869 11:23:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:22.869 11:23:36 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:22.869 11:23:36 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:22.869 11:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.869 11:23:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:22.869 11:23:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:22.869 11:23:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.869 11:23:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.869 11:23:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.869 11:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.869 11:23:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.127 11:23:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:23.127 11:23:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:23.127 11:23:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.127 11:23:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.127 11:23:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.127 11:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.127 11:23:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.385 11:23:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:23.385 11:23:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:23.385 11:23:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:23.385 11:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:23.641 11:23:37 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:23.641 11:23:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:23.641 11:23:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XMRUt9EPrO /tmp/tmp.esZli40IQ0 00:37:23.641 11:23:37 keyring_file -- keyring/file.sh@20 -- # killprocess 434381 00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 434381 ']' 00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 434381 00:37:23.641 11:23:37 keyring_file -- 
common/autotest_common.sh@953 -- # uname
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434381
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434381'
00:37:23.641 killing process with pid 434381
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@967 -- # kill 434381
00:37:23.641 Received shutdown signal, test time was about 1.000000 seconds
00:37:23.641
00:37:23.641 Latency(us)
00:37:23.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:23.641 ===================================================================================================================
00:37:23.641 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:37:23.641 11:23:37 keyring_file -- common/autotest_common.sh@972 -- # wait 434381
00:37:23.898 11:23:38 keyring_file -- keyring/file.sh@21 -- # killprocess 432915
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 432915 ']'
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 432915
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@953 -- # uname
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432915
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432915'
00:37:23.898 killing process with pid 432915
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@967 -- # kill 432915
00:37:23.898 [2024-07-11 11:23:38.214423] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:37:23.898 11:23:38 keyring_file -- common/autotest_common.sh@972 -- # wait 432915
00:37:24.464
00:37:24.464 real 0m14.019s
00:37:24.464 user 0m35.177s
00:37:24.464 sys 0m3.302s
00:37:24.464 11:23:38 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:24.464 11:23:38 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:37:24.464 ************************************
00:37:24.464 END TEST keyring_file
00:37:24.464 ************************************
00:37:24.464 11:23:38 -- common/autotest_common.sh@1142 -- # return 0
00:37:24.464 11:23:38 -- spdk/autotest.sh@296 -- # [[ y == y ]]
00:37:24.464 11:23:38 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:37:24.464 11:23:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:37:24.464 11:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:37:24.464 11:23:38 -- common/autotest_common.sh@10 -- # set +x
00:37:24.464 ************************************
00:37:24.464 START TEST keyring_linux
00:37:24.464 ************************************
00:37:24.464 11:23:38 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:24.464 * Looking for test storage... 00:37:24.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:24.464 11:23:38 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.464 11:23:38 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.464 11:23:38 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.464 11:23:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.464 11:23:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.464 11:23:38 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.464 11:23:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:24.464 11:23:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:24.464 11:23:38 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:24.464 /tmp/:spdk-test:key0 00:37:24.464 11:23:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:24.464 11:23:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:24.464 11:23:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:24.465 /tmp/:spdk-test:key1 00:37:24.465 11:23:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=434748 00:37:24.465 11:23:38 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:24.465 11:23:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 434748 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 434748 ']' 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:24.465 11:23:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:24.465 [2024-07-11 11:23:38.858906] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
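The prep_key steps above produce the NVMe TLS PSK interchange blobs that the rest of keyring_linux passes around as /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal stand-alone sketch of that step, assuming the inline python in nvmf/common.sh encodes the raw key string followed by its little-endian CRC32 (with :00: reflecting digest 0, i.e. no retained hash):

key=00112233445566778899aabbccddeeff      # sample key0 from linux.sh@13
path=/tmp/:spdk-test:key0
psk=$(python3 - "$key" <<'PY'
import base64, struct, sys, zlib
k = sys.argv[1].encode()                  # the key string is used as raw bytes
crc = struct.pack("<I", zlib.crc32(k))    # 4-byte little-endian CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + crc).decode())
PY
)
printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"                        # as keyring/common.sh@21 does
echo "$path"                              # as keyring/common.sh@23 does

If the CRC assumption holds, this should reproduce the NVMeTLSkey-1:00:MDAxMTIy...JEiQ: value that the keyctl records below link into the session keyring.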
00:37:24.465 [2024-07-11 11:23:38.858988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434748 ] 00:37:24.722 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.722 [2024-07-11 11:23:38.918524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.722 [2024-07-11 11:23:39.006037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.980 11:23:39 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:24.980 11:23:39 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:24.980 11:23:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:24.981 [2024-07-11 11:23:39.248546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.981 null0 00:37:24.981 [2024-07-11 11:23:39.280598] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:24.981 [2024-07-11 11:23:39.281084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.981 11:23:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:24.981 611448550 00:37:24.981 11:23:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:24.981 408565495 00:37:24.981 11:23:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=434872 00:37:24.981 11:23:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 434872 /var/tmp/bperf.sock 00:37:24.981 11:23:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 434872 ']' 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:24.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:24.981 11:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:24.981 [2024-07-11 11:23:39.354492] Starting SPDK v24.09-pre git sha1 e64f085ad / DPDK 22.11.4 initialization... 
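The two keyctl add user calls above stage both PSKs on the session keyring (@s) and print the kernel serials (611448550 and 408565495 in this run) that check_keys later resolves with keyctl search and that cleanup unlinks. A hedged reconstruction of that keyutils round-trip, reading the blobs back from the prep_key files rather than inlining them as linux.sh@66/@67 do:

sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
keyctl search @s user :spdk-test:key0     # name -> serial, as get_keysn does
keyctl print "$sn0"                       # payload readback for the linux.sh@27 compare
keyctl unlink "$sn0"                      # cleanup path, prints "1 links removed"
keyctl unlink "$sn1"

bdevperf then consumes the named key by passing --psk :spdk-test:key0 to bdev_nvme_attach_controller after keyring_linux_set_options --enable, which is the sequence the records below replay; the later attach attempt against the unlinked :spdk-test:key1 is the negative-path check that ends in the JSON-RPC Input/output error.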
00:37:24.981 [2024-07-11 11:23:39.354567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434872 ] 00:37:24.981 EAL: No free 2048 kB hugepages reported on node 1 00:37:25.238 [2024-07-11 11:23:39.412105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.238 [2024-07-11 11:23:39.501829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.238 11:23:39 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:25.238 11:23:39 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:25.238 11:23:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:25.238 11:23:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:25.496 11:23:39 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:25.496 11:23:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:25.752 11:23:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:25.753 11:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:26.009 [2024-07-11 11:23:40.372160] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:26.267 nvme0n1 00:37:26.267 11:23:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:26.267 11:23:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:26.267 11:23:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:26.267 11:23:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:26.267 11:23:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:26.267 11:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.524 11:23:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:26.524 11:23:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:26.524 11:23:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:26.525 11:23:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:26.525 11:23:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.525 11:23:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:26.525 11:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@25 -- # sn=611448550 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 611448550 == \6\1\1\4\4\8\5\5\0 ]] 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 611448550 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:26.782 11:23:40 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.782 Running I/O for 1 seconds... 00:37:27.715 00:37:27.715 Latency(us) 00:37:27.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.715 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:27.715 nvme0n1 : 1.01 10737.49 41.94 0.00 0.00 11841.85 3956.43 16019.91 00:37:27.715 =================================================================================================================== 00:37:27.715 Total : 10737.49 41.94 0.00 0.00 11841.85 3956.43 16019.91 00:37:27.715 0 00:37:27.715 11:23:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:27.715 11:23:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:27.972 11:23:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:27.972 11:23:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:27.972 11:23:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:27.972 11:23:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:27.972 11:23:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.972 11:23:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:28.229 11:23:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:28.229 11:23:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:28.229 11:23:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:28.229 11:23:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.229 11:23:42 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.229 11:23:42 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:28.487 [2024-07-11 11:23:42.826298] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:28.487 [2024-07-11 11:23:42.826968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x769680 (107): Transport endpoint is not connected 00:37:28.487 [2024-07-11 11:23:42.827957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x769680 (9): Bad file descriptor 00:37:28.487 [2024-07-11 11:23:42.828957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:28.487 [2024-07-11 11:23:42.828977] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:28.487 [2024-07-11 11:23:42.828990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:28.487 request: 00:37:28.487 { 00:37:28.487 "name": "nvme0", 00:37:28.487 "trtype": "tcp", 00:37:28.487 "traddr": "127.0.0.1", 00:37:28.487 "adrfam": "ipv4", 00:37:28.487 "trsvcid": "4420", 00:37:28.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.487 "prchk_reftag": false, 00:37:28.487 "prchk_guard": false, 00:37:28.487 "hdgst": false, 00:37:28.487 "ddgst": false, 00:37:28.487 "psk": ":spdk-test:key1", 00:37:28.487 "method": "bdev_nvme_attach_controller", 00:37:28.487 "req_id": 1 00:37:28.487 } 00:37:28.487 Got JSON-RPC error response 00:37:28.487 response: 00:37:28.487 { 00:37:28.487 "code": -5, 00:37:28.487 "message": "Input/output error" 00:37:28.487 } 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@33 -- # sn=611448550 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 611448550 00:37:28.487 1 links removed 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@33 -- # sn=408565495 00:37:28.487 11:23:42 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 408565495 00:37:28.487 1 links removed 00:37:28.487 11:23:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 434872 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 434872 ']' 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 434872 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434872 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434872' 00:37:28.487 killing process with pid 434872 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@967 -- # kill 434872 00:37:28.487 Received shutdown signal, test time was about 1.000000 seconds 00:37:28.487 00:37:28.487 Latency(us) 00:37:28.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.487 =================================================================================================================== 00:37:28.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.487 11:23:42 keyring_linux -- common/autotest_common.sh@972 -- # wait 434872 00:37:28.745 11:23:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 434748 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 434748 ']' 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 434748 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434748 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434748' 00:37:28.745 killing process with pid 434748 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@967 -- # kill 434748 00:37:28.745 11:23:43 keyring_linux -- common/autotest_common.sh@972 -- # wait 434748 00:37:29.311 00:37:29.311 real 0m4.829s 00:37:29.311 user 0m9.487s 00:37:29.311 sys 0m1.594s 00:37:29.311 11:23:43 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:29.311 11:23:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:29.311 ************************************ 00:37:29.311 END TEST keyring_linux 00:37:29.311 ************************************ 00:37:29.311 11:23:43 -- common/autotest_common.sh@1142 -- # return 0 00:37:29.311 11:23:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:29.311 11:23:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:29.311 11:23:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:29.311 11:23:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:29.311 11:23:43 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:29.311 11:23:43 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:29.311 11:23:43 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:29.311 11:23:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:29.311 11:23:43 -- common/autotest_common.sh@10 -- # set +x 00:37:29.311 11:23:43 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:29.311 11:23:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:29.311 11:23:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:29.311 11:23:43 -- common/autotest_common.sh@10 -- # set +x 00:37:31.213 INFO: APP EXITING 00:37:31.213 INFO: killing all VMs 00:37:31.213 INFO: killing vhost app 00:37:31.213 INFO: EXIT DONE 00:37:32.147 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:32.147 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:32.147 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:32.147 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:32.147 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:32.147 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:32.147 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:32.147 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:32.147 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:32.147 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:32.147 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:32.147 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:32.147 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:32.147 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:32.147 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:32.147 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:32.147 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:33.522 Cleaning 00:37:33.522 Removing: /var/run/dpdk/spdk0/config 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:33.522 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:33.522 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:33.522 Removing: /var/run/dpdk/spdk1/config 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:33.522 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:33.522 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:33.522 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:33.522 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:33.522 Removing: /var/run/dpdk/spdk2/config 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:33.522 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:33.522 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:33.522 Removing: /var/run/dpdk/spdk3/config 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:33.522 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:33.522 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:33.522 Removing: /var/run/dpdk/spdk4/config 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:33.522 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:33.780 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:33.780 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:33.780 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:33.780 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:33.780 Removing: /dev/shm/bdev_svc_trace.1 00:37:33.780 Removing: /dev/shm/nvmf_trace.0 00:37:33.780 Removing: /dev/shm/spdk_tgt_trace.pid115842 00:37:33.780 Removing: /var/run/dpdk/spdk0 00:37:33.780 Removing: /var/run/dpdk/spdk1 00:37:33.780 Removing: /var/run/dpdk/spdk2 00:37:33.780 Removing: /var/run/dpdk/spdk3 00:37:33.780 Removing: /var/run/dpdk/spdk4 00:37:33.780 Removing: /var/run/dpdk/spdk_pid114295 00:37:33.780 Removing: /var/run/dpdk/spdk_pid115024 00:37:33.780 Removing: /var/run/dpdk/spdk_pid115842 00:37:33.780 Removing: /var/run/dpdk/spdk_pid116278 00:37:33.780 Removing: /var/run/dpdk/spdk_pid116963 00:37:33.780 Removing: /var/run/dpdk/spdk_pid117105 00:37:33.780 Removing: /var/run/dpdk/spdk_pid117817 00:37:33.780 Removing: /var/run/dpdk/spdk_pid117828 00:37:33.780 Removing: /var/run/dpdk/spdk_pid118070 00:37:33.780 Removing: /var/run/dpdk/spdk_pid119257 00:37:33.780 Removing: /var/run/dpdk/spdk_pid120163 00:37:33.780 Removing: /var/run/dpdk/spdk_pid120357 00:37:33.780 
Removing: /var/run/dpdk/spdk_pid120543 00:37:33.780 Removing: /var/run/dpdk/spdk_pid120745 00:37:33.780 Removing: /var/run/dpdk/spdk_pid120934 00:37:33.780 Removing: /var/run/dpdk/spdk_pid121102 00:37:33.780 Removing: /var/run/dpdk/spdk_pid121364 00:37:33.780 Removing: /var/run/dpdk/spdk_pid121542 00:37:33.780 Removing: /var/run/dpdk/spdk_pid121743 00:37:33.780 Removing: /var/run/dpdk/spdk_pid124126 00:37:33.780 Removing: /var/run/dpdk/spdk_pid124288 00:37:33.780 Removing: /var/run/dpdk/spdk_pid124454 00:37:33.780 Removing: /var/run/dpdk/spdk_pid124574 00:37:33.780 Removing: /var/run/dpdk/spdk_pid124883 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125008 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125321 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125442 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125604 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125625 00:37:33.780 Removing: /var/run/dpdk/spdk_pid125789 00:37:33.781 Removing: /var/run/dpdk/spdk_pid125916 00:37:33.781 Removing: /var/run/dpdk/spdk_pid126283 00:37:33.781 Removing: /var/run/dpdk/spdk_pid126436 00:37:33.781 Removing: /var/run/dpdk/spdk_pid126637 00:37:33.781 Removing: /var/run/dpdk/spdk_pid126803 00:37:33.781 Removing: /var/run/dpdk/spdk_pid126944 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127015 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127167 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127440 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127597 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127752 00:37:33.781 Removing: /var/run/dpdk/spdk_pid127946 00:37:33.781 Removing: /var/run/dpdk/spdk_pid128183 00:37:33.781 Removing: /var/run/dpdk/spdk_pid128336 00:37:33.781 Removing: /var/run/dpdk/spdk_pid128497 00:37:33.781 Removing: /var/run/dpdk/spdk_pid128722 00:37:33.781 Removing: /var/run/dpdk/spdk_pid128928 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129081 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129233 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129450 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129668 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129827 00:37:33.781 Removing: /var/run/dpdk/spdk_pid129979 00:37:33.781 Removing: /var/run/dpdk/spdk_pid130250 00:37:33.781 Removing: /var/run/dpdk/spdk_pid130420 00:37:33.781 Removing: /var/run/dpdk/spdk_pid130573 00:37:33.781 Removing: /var/run/dpdk/spdk_pid130731 00:37:33.781 Removing: /var/run/dpdk/spdk_pid130917 00:37:33.781 Removing: /var/run/dpdk/spdk_pid131167 00:37:33.781 Removing: /var/run/dpdk/spdk_pid133280 00:37:33.781 Removing: /var/run/dpdk/spdk_pid186771 00:37:33.781 Removing: /var/run/dpdk/spdk_pid189281 00:37:33.781 Removing: /var/run/dpdk/spdk_pid196835 00:37:33.781 Removing: /var/run/dpdk/spdk_pid200002 00:37:33.781 Removing: /var/run/dpdk/spdk_pid202287 00:37:33.781 Removing: /var/run/dpdk/spdk_pid202742 00:37:33.781 Removing: /var/run/dpdk/spdk_pid206627 00:37:33.781 Removing: /var/run/dpdk/spdk_pid210485 00:37:33.781 Removing: /var/run/dpdk/spdk_pid210487 00:37:33.781 Removing: /var/run/dpdk/spdk_pid211136 00:37:33.781 Removing: /var/run/dpdk/spdk_pid211690 00:37:33.781 Removing: /var/run/dpdk/spdk_pid212339 00:37:33.781 Removing: /var/run/dpdk/spdk_pid212732 00:37:33.781 Removing: /var/run/dpdk/spdk_pid212741 00:37:33.781 Removing: /var/run/dpdk/spdk_pid212996 00:37:33.781 Removing: /var/run/dpdk/spdk_pid213133 00:37:33.781 Removing: /var/run/dpdk/spdk_pid213139 00:37:33.781 Removing: /var/run/dpdk/spdk_pid213792 00:37:33.781 Removing: /var/run/dpdk/spdk_pid214328 00:37:33.781 Removing: /var/run/dpdk/spdk_pid214988 00:37:33.781 Removing: 
/var/run/dpdk/spdk_pid215383 00:37:33.781 Removing: /var/run/dpdk/spdk_pid215386 00:37:33.781 Removing: /var/run/dpdk/spdk_pid215651 00:37:33.781 Removing: /var/run/dpdk/spdk_pid216536 00:37:33.781 Removing: /var/run/dpdk/spdk_pid217257 00:37:33.781 Removing: /var/run/dpdk/spdk_pid222713 00:37:33.781 Removing: /var/run/dpdk/spdk_pid223247 00:37:33.781 Removing: /var/run/dpdk/spdk_pid225998 00:37:33.781 Removing: /var/run/dpdk/spdk_pid229683 00:37:33.781 Removing: /var/run/dpdk/spdk_pid231743 00:37:33.781 Removing: /var/run/dpdk/spdk_pid237993 00:37:33.781 Removing: /var/run/dpdk/spdk_pid243179 00:37:33.781 Removing: /var/run/dpdk/spdk_pid244380 00:37:34.055 Removing: /var/run/dpdk/spdk_pid245080 00:37:34.055 Removing: /var/run/dpdk/spdk_pid255223 00:37:34.055 Removing: /var/run/dpdk/spdk_pid257427 00:37:34.055 Removing: /var/run/dpdk/spdk_pid282117 00:37:34.055 Removing: /var/run/dpdk/spdk_pid285015 00:37:34.055 Removing: /var/run/dpdk/spdk_pid286695 00:37:34.055 Removing: /var/run/dpdk/spdk_pid287949 00:37:34.055 Removing: /var/run/dpdk/spdk_pid288026 00:37:34.055 Removing: /var/run/dpdk/spdk_pid288158 00:37:34.055 Removing: /var/run/dpdk/spdk_pid288297 00:37:34.055 Removing: /var/run/dpdk/spdk_pid288624 00:37:34.055 Removing: /var/run/dpdk/spdk_pid289935 00:37:34.055 Removing: /var/run/dpdk/spdk_pid290535 00:37:34.055 Removing: /var/run/dpdk/spdk_pid290963 00:37:34.055 Removing: /var/run/dpdk/spdk_pid292468 00:37:34.055 Removing: /var/run/dpdk/spdk_pid292873 00:37:34.055 Removing: /var/run/dpdk/spdk_pid293432 00:37:34.055 Removing: /var/run/dpdk/spdk_pid295824 00:37:34.055 Removing: /var/run/dpdk/spdk_pid299072 00:37:34.055 Removing: /var/run/dpdk/spdk_pid302617 00:37:34.055 Removing: /var/run/dpdk/spdk_pid326239 00:37:34.055 Removing: /var/run/dpdk/spdk_pid328884 00:37:34.055 Removing: /var/run/dpdk/spdk_pid332640 00:37:34.055 Removing: /var/run/dpdk/spdk_pid333582 00:37:34.055 Removing: /var/run/dpdk/spdk_pid334532 00:37:34.055 Removing: /var/run/dpdk/spdk_pid337078 00:37:34.055 Removing: /var/run/dpdk/spdk_pid339349 00:37:34.055 Removing: /var/run/dpdk/spdk_pid343510 00:37:34.055 Removing: /var/run/dpdk/spdk_pid343533 00:37:34.055 Removing: /var/run/dpdk/spdk_pid346396 00:37:34.055 Removing: /var/run/dpdk/spdk_pid346923 00:37:34.055 Removing: /var/run/dpdk/spdk_pid347287 00:37:34.055 Removing: /var/run/dpdk/spdk_pid347552 00:37:34.055 Removing: /var/run/dpdk/spdk_pid347607 00:37:34.055 Removing: /var/run/dpdk/spdk_pid348643 00:37:34.055 Removing: /var/run/dpdk/spdk_pid349859 00:37:34.055 Removing: /var/run/dpdk/spdk_pid351061 00:37:34.055 Removing: /var/run/dpdk/spdk_pid352219 00:37:34.055 Removing: /var/run/dpdk/spdk_pid353479 00:37:34.055 Removing: /var/run/dpdk/spdk_pid354674 00:37:34.055 Removing: /var/run/dpdk/spdk_pid358355 00:37:34.055 Removing: /var/run/dpdk/spdk_pid358805 00:37:34.055 Removing: /var/run/dpdk/spdk_pid360083 00:37:34.055 Removing: /var/run/dpdk/spdk_pid360820 00:37:34.055 Removing: /var/run/dpdk/spdk_pid364485 00:37:34.055 Removing: /var/run/dpdk/spdk_pid366385 00:37:34.055 Removing: /var/run/dpdk/spdk_pid369788 00:37:34.055 Removing: /var/run/dpdk/spdk_pid373160 00:37:34.055 Removing: /var/run/dpdk/spdk_pid380065 00:37:34.055 Removing: /var/run/dpdk/spdk_pid384410 00:37:34.055 Removing: /var/run/dpdk/spdk_pid384413 00:37:34.055 Removing: /var/run/dpdk/spdk_pid396616 00:37:34.055 Removing: /var/run/dpdk/spdk_pid397028 00:37:34.055 Removing: /var/run/dpdk/spdk_pid397494 00:37:34.055 Removing: /var/run/dpdk/spdk_pid397957 00:37:34.055 Removing: 
/var/run/dpdk/spdk_pid398527 00:37:34.055 Removing: /var/run/dpdk/spdk_pid398944 00:37:34.055 Removing: /var/run/dpdk/spdk_pid399349 00:37:34.055 Removing: /var/run/dpdk/spdk_pid399758 00:37:34.055 Removing: /var/run/dpdk/spdk_pid402251 00:37:34.055 Removing: /var/run/dpdk/spdk_pid402389 00:37:34.055 Removing: /var/run/dpdk/spdk_pid406172 00:37:34.055 Removing: /var/run/dpdk/spdk_pid406297 00:37:34.055 Removing: /var/run/dpdk/spdk_pid408062 00:37:34.055 Removing: /var/run/dpdk/spdk_pid413489 00:37:34.055 Removing: /var/run/dpdk/spdk_pid413494 00:37:34.055 Removing: /var/run/dpdk/spdk_pid416385 00:37:34.055 Removing: /var/run/dpdk/spdk_pid417785 00:37:34.055 Removing: /var/run/dpdk/spdk_pid419181 00:37:34.055 Removing: /var/run/dpdk/spdk_pid419922 00:37:34.055 Removing: /var/run/dpdk/spdk_pid421329 00:37:34.055 Removing: /var/run/dpdk/spdk_pid422200 00:37:34.055 Removing: /var/run/dpdk/spdk_pid427515 00:37:34.055 Removing: /var/run/dpdk/spdk_pid427860 00:37:34.055 Removing: /var/run/dpdk/spdk_pid428252 00:37:34.055 Removing: /var/run/dpdk/spdk_pid429808 00:37:34.055 Removing: /var/run/dpdk/spdk_pid430204 00:37:34.055 Removing: /var/run/dpdk/spdk_pid430485 00:37:34.055 Removing: /var/run/dpdk/spdk_pid432915 00:37:34.055 Removing: /var/run/dpdk/spdk_pid432934 00:37:34.055 Removing: /var/run/dpdk/spdk_pid434381 00:37:34.055 Removing: /var/run/dpdk/spdk_pid434748 00:37:34.055 Removing: /var/run/dpdk/spdk_pid434872 00:37:34.055 Clean 00:37:34.055 11:23:48 -- common/autotest_common.sh@1451 -- # return 0 00:37:34.055 11:23:48 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:34.055 11:23:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:34.055 11:23:48 -- common/autotest_common.sh@10 -- # set +x 00:37:34.314 11:23:48 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:34.314 11:23:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:34.314 11:23:48 -- common/autotest_common.sh@10 -- # set +x 00:37:34.314 11:23:48 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:34.314 11:23:48 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:34.314 11:23:48 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:34.314 11:23:48 -- spdk/autotest.sh@391 -- # hash lcov 00:37:34.314 11:23:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:34.314 11:23:48 -- spdk/autotest.sh@393 -- # hostname 00:37:34.314 11:23:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:34.314 geninfo: WARNING: invalid characters removed from testname! 
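The lcov records that follow post-process the capture above: merge the baseline and per-test captures into cov_total.info, then strip DPDK, system, and example/app sources from the totals. Condensed, that pipeline has the following shape; this is a sketch, since the real invocations also carry the full --rc lcov_*/genhtml_* and --no-external flag set on every call, while the in-place -o usage mirrors what autotest.sh itself does:

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"   # prune external sources
done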
00:38:06.393 11:24:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:06.393 11:24:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:08.951 11:24:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:12.243 11:24:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:14.780 11:24:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:18.076 11:24:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:20.624 11:24:34 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:20.624 11:24:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:20.624 11:24:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:20.624 11:24:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:20.624 11:24:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:20.624 11:24:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.624 11:24:34 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.624 11:24:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.624 11:24:34 -- paths/export.sh@5 -- $ export PATH 00:38:20.624 11:24:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.624 11:24:34 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:20.624 11:24:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:20.624 11:24:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720689874.XXXXXX 00:38:20.624 11:24:34 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720689874.NfbEXr 00:38:20.624 11:24:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:20.624 11:24:34 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:38:20.624 11:24:34 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:20.624 11:24:34 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:20.624 11:24:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:20.624 11:24:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:20.624 11:24:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:20.624 11:24:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:20.624 11:24:34 -- common/autotest_common.sh@10 -- $ set +x 00:38:20.624 11:24:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:20.624 11:24:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:20.624 11:24:34 -- pm/common@17 -- $ local monitor 00:38:20.624 11:24:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.624 11:24:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.624 11:24:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.624 
11:24:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:20.624 11:24:34 -- pm/common@21 -- $ date +%s 00:38:20.624 11:24:34 -- pm/common@21 -- $ date +%s 00:38:20.624 11:24:34 -- pm/common@25 -- $ sleep 1 00:38:20.624 11:24:34 -- pm/common@21 -- $ date +%s 00:38:20.624 11:24:34 -- pm/common@21 -- $ date +%s 00:38:20.624 11:24:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720689874 00:38:20.624 11:24:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720689874 00:38:20.624 11:24:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720689874 00:38:20.624 11:24:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720689874 00:38:20.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720689874_collect-vmstat.pm.log 00:38:20.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720689874_collect-cpu-load.pm.log 00:38:20.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720689874_collect-cpu-temp.pm.log 00:38:20.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720689874_collect-bmc-pm.bmc.pm.log 00:38:21.565 11:24:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:21.565 11:24:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:21.565 11:24:35 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:21.565 11:24:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:21.565 11:24:35 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:21.565 11:24:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:21.565 11:24:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:21.565 11:24:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:21.565 11:24:35 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:21.565 11:24:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:21.565 11:24:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:21.565 11:24:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:21.565 11:24:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:21.565 11:24:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:21.565 11:24:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.565 11:24:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:21.565 11:24:35 -- pm/common@44 -- $ pid=446743 00:38:21.565 11:24:35 -- pm/common@50 -- $ kill -TERM 446743 00:38:21.565 11:24:35 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:21.565 11:24:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:21.565 11:24:35 -- pm/common@44 -- $ pid=446745 00:38:21.565 11:24:35 -- pm/common@50 -- $ kill -TERM 446745 00:38:21.565 11:24:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.565 11:24:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:21.565 11:24:35 -- pm/common@44 -- $ pid=446747 00:38:21.565 11:24:35 -- pm/common@50 -- $ kill -TERM 446747 00:38:21.565 11:24:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:21.565 11:24:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:21.565 11:24:35 -- pm/common@44 -- $ pid=446774 00:38:21.565 11:24:35 -- pm/common@50 -- $ sudo -E kill -TERM 446774 00:38:21.565 + [[ -n 9904 ]] 00:38:21.565 + sudo kill 9904 00:38:21.575 [Pipeline] } 00:38:21.593 [Pipeline] // stage 00:38:21.597 [Pipeline] } 00:38:21.613 [Pipeline] // timeout 00:38:21.618 [Pipeline] } 00:38:21.634 [Pipeline] // catchError 00:38:21.639 [Pipeline] } 00:38:21.655 [Pipeline] // wrap 00:38:21.661 [Pipeline] } 00:38:21.676 [Pipeline] // catchError 00:38:21.684 [Pipeline] stage 00:38:21.686 [Pipeline] { (Epilogue) 00:38:21.700 [Pipeline] catchError 00:38:21.701 [Pipeline] { 00:38:21.715 [Pipeline] echo 00:38:21.716 Cleanup processes 00:38:21.722 [Pipeline] sh 00:38:22.008 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.008 446880 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:22.008 447009 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.023 [Pipeline] sh 00:38:22.310 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.310 ++ grep -v 'sudo pgrep' 00:38:22.310 ++ awk '{print $1}' 00:38:22.310 + sudo kill -9 446880 00:38:22.322 [Pipeline] sh 00:38:22.607 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:32.590 [Pipeline] sh 00:38:32.879 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:32.879 Artifacts sizes are good 00:38:32.896 [Pipeline] archiveArtifacts 00:38:32.904 Archiving artifacts 00:38:33.656 [Pipeline] sh 00:38:33.941 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:33.958 [Pipeline] cleanWs 00:38:33.969 [WS-CLEANUP] Deleting project workspace... 00:38:33.969 [WS-CLEANUP] Deferred wipeout is used... 00:38:33.976 [WS-CLEANUP] done 00:38:33.978 [Pipeline] } 00:38:33.999 [Pipeline] // catchError 00:38:34.013 [Pipeline] sh 00:38:34.301 + logger -p user.info -t JENKINS-CI 00:38:34.310 [Pipeline] } 00:38:34.326 [Pipeline] // stage 00:38:34.331 [Pipeline] } 00:38:34.348 [Pipeline] // node 00:38:34.353 [Pipeline] End of Pipeline 00:38:34.379 Finished: SUCCESS